
Min-max Extrapolation Scheme for Fast Estimation of 3D Potts Field Partition Functions. Application to the Joint Detection-Estimation of Brain Activity in fMRI



In this paper, we propose a fast numerical scheme to estimate Partition Functions (PF) of symmetric Potts fields. Our strategy is first validated on 2D two-color Potts fields and then on 3D two- and three-color Potts fields. It is then applied to the joint detection-estimation of brain activity from functional Magnetic Resonance Imaging (fMRI) data, where the goal is to automatically recover activated, deactivated and inactivated brain regions and to estimate region-dependent hemodynamic filters. For any brain region, a specific 3D Potts field indeed embodies the spatial correlation over the hidden states of the voxels by modeling whether they are activated, deactivated or inactive. To make spatial regularization adaptive, the PFs of the Potts fields over all brain regions are computed prior to the brain activity estimation. Our approach is first based upon a classical path-sampling method to approximate a small subset of reference PFs corresponding to prespecified regions. Then, we propose an extrapolation method that allows us to approximate the PFs associated to the Potts fields defined over the remaining brain regions. In comparison with preexisting methods either based on a path-sampling strategy or mean-field approximations, our contribution strongly alleviates the computational cost and makes spatially adaptive regularization of whole brain fMRI datasets feasible. It is also robust against grid inhomogeneities and efficient irrespective of the topological configurations of the brain regions.




  1. Here, irregular refers to equally spaced Cartesian graphs of non-parallelepipedic shape.

  2. \(\lim_{\beta\rightarrow +\infty} \mathcal{A}_\mathcal{T}(\beta,\mathcal{G}_p)=0\).

  3. In 2D, the right side is connected to the left and the top to the bottom.


  1. Risser, L., Vincent, T., Ciuciu, P., & Idier, J. (2009). Robust extrapolation scheme for fast estimation of 3D Ising field partition functions. Application to within subject fMRI data analysis. In 12th Proc. MICCAI, LNCS (Vol. 5761, pp. 975–983). London, UK.

  2. Vincent, T., Risser, L., & Ciuciu, P. (2010). Spatially adaptive mixture modeling for analysis of fMRI time series. IEEE Transactions on Medical Imaging, 29(4), 1059–1074.

  3. Thirion, B., Flandin, G., Pinel, P., Roche, A., Ciuciu, P., & Poline, J.-B. (2006). Dealing with the shortcomings of spatial normalization: Multi-subject parcellation of fMRI datasets. Human Brain Mapping, 27(8), 678–693.

  4. Hammersley, J. M., & Clifford, P. (1990). Markov random fields in statistics. In G. R. Grimmett & D. J. A. Welsh (Eds.), Disorder in physical systems: A volume in honour of John M. Hammersley (pp. 19–32). Oxford University Press.

  5. Meng, X. L., & Wong, W. H. (1996). Simulating ratios of normalizing constants via a simple identity: A theoretical exploration. Statistica Sinica, 6, 831–860.

  6. Gelman, A., & Meng, X.-L. (1998). Simulating normalizing constants: From importance sampling to bridge sampling to path sampling. Statistical Science, 13, 163–185.

  7. Jerrum, M., & Sinclair, A. (1993). Polynomial-time approximation algorithms for the Ising model. SIAM Journal on Computing, 22, 1087–1116.

  8. Trillon, A., Idier, J., & Peureux, P. (2008). Unsupervised Bayesian 3D reconstruction for non-destructive evaluation using gammagraphy. In EUSIPCO 16th European Signal Processing Conference. Lausanne, Switzerland.

  9. Forbes, F., & Peyrard, N. (2003). Hidden Markov random field model selection criteria based on mean field-like approximations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9), 1089–1101.

  10. Swendsen, R. H., & Wang, J. S. (1987). Nonuniversal critical dynamics in Monte Carlo simulations. Physical Review Letters, 58, 86–88.

  11. Risser, L., Idier, J., Ciuciu, P., & Vincent, T. (2009). Fast bilinear extrapolation of 3D Ising field partition function. Application to fMRI image analysis. In Proc. IEEE ICIP. Cairo, Egypt.

  12. Onsager, L. (1944). A two-dimensional model with an order-disorder transition. Physical Review, 65(3&4), 117–149.

  13. Higdon, D. M., Bowsher, J. E., Johnson, V. E., Turkington, T. G., Gilland, D. R., & Jaszczak, R. J. (1997). Fully Bayesian estimation of Gibbs hyperparameters for emission computed tomography data. IEEE Transactions on Medical Imaging, 16(5), 516–526.

  14. Vincent, T., Ciuciu, P., & Idier, J. (2007). Spatial mixture modelling for the joint detection-estimation of brain activity in fMRI. In 32nd Proc. IEEE ICASSP (Vol. I, pp. 325–328). Honolulu, Hawaii.

  15. Makni, S., Idier, J., Vincent, T., Thirion, B., Dehaene-Lambertz, G., & Ciuciu, P. (2008). A fully Bayesian approach to the parcel-based detection-estimation of brain activity in fMRI. NeuroImage, 41(3), 941–969.

  16. Ciuciu, P., Poline, J.-B., Marrelec, G., Idier, J., Pallier, Ch., & Benali, H. (2003). Unsupervised robust non-parametric estimation of the hemodynamic response function for any fMRI experiment. IEEE Transactions on Medical Imaging, 22(10), 1235–1251.

  17. Higdon, D. M. (1998). Auxiliary variable methods for Markov chain Monte Carlo with applications. Journal of the American Statistical Association, 93(442), 585–595.

  18. Green, P. J., & Richardson, S. (2002). Hidden Markov models and disease mapping. Journal of the American Statistical Association, 97(460), 1–16.

  19. Smith, M., Pütz, B., Auer, D., & Fahrmeir, L. (2003). Assessing brain activity through spatial Bayesian variable selection. NeuroImage, 20, 802–815.

  20. Smith, D., & Smith, M. (2006). Estimation of binary Markov random fields using Markov Chain Monte Carlo. Journal of Computational and Graphical Statistics, 15(1), 207–227.

  21. Chandler, D. (1987). Introduction to modern statistical mechanics (1st ed.). Oxford University Press.


Author information

Correspondence to Philippe Ciuciu.

Additional information

The authors thank the Région Ile de France for funding.



A Properties of our Log-PF Estimate

The first property deals with the asymptotic behavior (β→ ∞) of the log-PF of a symmetric L-color Potts field:

$$\lim\limits_{\beta \rightarrow \infty} \left( \log{Z(\beta)} - \beta c \right) = \log{L} $$

It is quite straightforward to demonstrate that when β→ ∞ only homogeneous configurations of \(\boldsymbol{q}\) have a significant weight in the evaluation of Z(β). In Potts MRFs, such configurations arise whenever all sites take the same label, leading first to \(\sum_{j\sim k} I(q_j = q_k) = c\) and finally to Eq. 15. Applying Eq. 15 to the extrapolation context allows one to derive the following proposition.

Proposition 1

\(\lim_{\beta \rightarrow \infty} \mathcal{A}_\mathcal{T}(\beta,\mathcal{G}_p) = 0\), so \(\log \widetilde{Z}_\mathcal{T}(\beta,\mathcal{G}_p)\) defined in Eq. 8 provides an asymptotically unbiased estimate of \(\log Z_\mathcal{T}(\beta)\), \(\forall \mathcal{G}_p\).


First, applying Eq. 15 to \(\mathcal{G}_p\) and using Eq. 8, we get:

$$ \begin{array}{rll} \lim\limits_{\beta \rightarrow \infty} \frac{c_\mathcal{T}}{c_{\mathcal{G}_p}}\left[ \log Z_{\mathcal{G}_p}(\beta) - \beta c_{\mathcal{G}_p} \right] &=& \frac{c_\mathcal{T}}{c_{\mathcal{G}_p}} \log{L}\\ \Leftrightarrow \lim\limits_{\beta \rightarrow \infty} \bigl[ \frac{c_\mathcal{T}}{c_{\mathcal{G}_p}} \left(\log Z_{\mathcal{G}_p}(\beta) - \log L\right) -\beta c_\mathcal{T}\bigr] &=&0\\ \Leftrightarrow \lim\limits_{\beta \rightarrow \infty} \bigl[ \log \widetilde{Z}_\mathcal{T}(\beta,\mathcal{G}_p) -\beta c_\mathcal{T}\bigr] &=& \log L. \end{array} $$

Applying Eq. 15 to \(\log Z_\mathcal{T}(\beta)\) and combining it with the above limit, we obtain \(\lim_{\beta\rightarrow \infty} \bigl[\log Z_\mathcal{T}(\beta) - \log \widetilde{Z}_{\mathcal{T}}(\beta,\mathcal{G}_p)\bigr]=0\), \(\forall \mathcal{G}_p\).
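This asymptotic behavior is easy to check numerically by exact enumeration on a toy graph. The following sketch is our own illustration, not code from the paper; the graph and function names are arbitrary. It uses the energy \(U(\boldsymbol{q})=\sum_{j\sim k} I(q_j = q_k)\) and verifies that \(\log Z(\beta) - \beta c \rightarrow \log L\):

```python
import itertools
import math

def log_partition(edges, n_sites, n_colors, beta):
    """Exact log-PF of a symmetric Potts field by brute-force enumeration:
    Z(beta) = sum_q exp(beta * U(q)), with U(q) = sum_{j~k} I(q_j = q_k)."""
    z = 0.0
    for q in itertools.product(range(n_colors), repeat=n_sites):
        u = sum(q[j] == q[k] for j, k in edges)
        z += math.exp(beta * u)  # safe here: tiny graph, moderate beta
    return math.log(z)

# 2x2 Cartesian grid: 4 sites, c = 4 cliques
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]
c = len(edges)
for L in (2, 3):
    gap = log_partition(edges, 4, L, 20.0) - 20.0 * c
    print(L, gap, math.log(L))  # gap approaches log L as beta grows
```

On this grid the L homogeneous configurations dominate: at β = 20 every other term is smaller by a factor of at least e^{−40}, so the gap matches log L to machine precision.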

The second property gives us the expression of the first-order derivative of the log-PF at β = 0. On the one hand, following [13], it can be shown that \((\log Z(\beta))'=\mathrm{E}\bigl[ U(\boldsymbol{q})\,|\, \beta\bigr]\).

On the other hand, for β = 0 all sites are independent and uniformly distributed over the L labels. Hence, for each clique jk, each of the L homogeneous configurations (q j , q k ) = (l, l) for \(l\in\left\{1,\,\ldots,\, L\right\}\) occurs with probability 1/L 2, so that every clique contributes 1/L to the expectation of U. We therefore obtain \(\mathrm{E}\bigl[U(\boldsymbol{q}) \,|\, \beta=0\bigr]= \sum_{k \thicksim j} 1/L = c/L\). Finally, by equating the two expressions, we get:

$$(\log Z(0))' \triangleq \bigl. {d \log{Z( \beta )}}/{d \beta} \bigr|_{\beta=0} = {c}/{L}\; . $$

From Eq. 8, we get \(\bigl(\log\widetilde{Z}_\mathcal{T}(\beta,\mathcal{G}_p)\bigr)' =\frac{c_\mathcal{T}}{c_{\mathcal{G}_p}} \left(\log Z_{\mathcal{G}_p}(\beta)\right)'\), hence Eq. 16 allows us to derive that \(\forall \mathcal{G}_p\), \((\log \widetilde{Z}_\mathcal{T}(0,\mathcal{G}_p))'\) provides an unbiased estimate of \((\log Z_\mathcal{T}(0))'\).
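The value \((\log Z(0))' = c/L\) can likewise be checked by a central finite difference on an exactly enumerated toy PF. Again, this is a hypothetical example of ours; the graph and names are arbitrary:

```python
import itertools
import math

def log_Z(edges, n_sites, n_colors, beta):
    """Exact log-PF of a symmetric Potts field by enumeration."""
    z = 0.0
    for q in itertools.product(range(n_colors), repeat=n_sites):
        u = sum(q[j] == q[k] for j, k in edges)
        z += math.exp(beta * u)
    return math.log(z)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # small graph, c = 5
h = 1e-5
for L in (2, 3):
    slope = (log_Z(edges, 4, L, h) - log_Z(edges, 4, L, -h)) / (2 * h)
    print(L, slope, len(edges) / L)  # slope matches c / L
```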

B Maximal Approximation Error

We give here a sufficient condition ensuring that the approximation errors \(\mathcal{A}_\mathcal{T}(\beta,\mathcal{G}_p)\) of 2-class Potts fields defined over \({\mathcal{T}}\) and \({\mathcal{G}_p}\) achieve their largest value at β = 0.

Proposition 2

\(\forall \mathcal{G}_p\), if \((s_\mathcal{T}-1) / c_\mathcal{T} \neq (s_{\mathcal{G}_p}-1)/c_{\mathcal{G}_p}\) (Hyp. 1) and \(\mathrm{E}_{\mathcal{T}}\bigl[U(\boldsymbol{q})\,|\,\beta\bigr] / c_\mathcal{T} \neq \mathrm{E}_{\mathcal{G}_p}\bigl[U(\boldsymbol{q})\,|\, \beta\bigr]/c_{\mathcal{G}_p}\), ∀ β > 0 (Hyp. 2), then \(\mathcal{A}_\mathcal{T}(0,\mathcal{G}_p)=\max_{\beta\in\mathbb{R}_+} \mathcal{A}_\mathcal{T}(\beta,\mathcal{G}_p)\), whose expression is given by Eq. 10.


Let \(\mathcal{E}_\mathcal{T}(\beta,\mathcal{G}_p)\) be the unnormalized approximation error: \(\mathcal{E}_\mathcal{T}(\beta,\mathcal{G}_p) = ( \log Z_\mathcal{T}(\beta) - \log \widetilde{Z}_{\mathcal{T}}(\beta,\mathcal{G}_p))^2\). We prove that \(\mathcal{E}_\mathcal{T}(0,\mathcal{G}_p) = \max_{\beta \in \mathbb{R}_{+}} \mathcal{E}_\mathcal{T}(\beta,\mathcal{G}_p)\) by showing that \(\mathcal{E}_\mathcal{T}(\beta,\mathcal{G}_p)\) is a strictly decreasing function on \(\mathbb{R}_+\):

$$ \begin{array}{lll} \frac{d\mathcal{E}_\mathcal{T}(\beta,\mathcal{G}_p)}{d \beta}\\ \quad = 2\, \underbrace{\bigl(\log Z_{\mathcal{T}} (\beta) - \frac{c_{\mathcal{T}}}{c_{\mathcal{G}_p}} (\log Z_{\mathcal{G}_p} (\beta) - \log L) - \log L\bigr)}_{ f_1(\beta)} \\ \quad \times \underbrace{\bigl( \mathrm{E}_{\mathcal{T}}\bigl[U\,|\,\beta\bigr] - \frac{c_{\mathcal{T}}}{c_{\mathcal{G}_p}} \mathrm{E}_{\mathcal{G}_p}\bigl[U\,|\,\beta\bigr]\bigr)}_{ f_2(\beta)}, \end{array}$$

where \(\mathcal{E}_\mathcal{T}(\beta,\mathcal{G}_p)\) is strictly monotonic on \(\mathbb{R}_+\) if \(f_1(\beta) \neq 0\) and \(f_2(\beta) \neq 0\), ∀ β > 0. According to Hyp. 2, we directly obtain \(f_2(\beta) \neq 0\). Moreover, it is easy to notice that \(f_1(\beta)= \pm \sqrt{\mathcal{E}_\mathcal{T}(\beta,\mathcal{G}_p)}\). Hence, \(f_1(0) \neq 0\) according to Hyp. 1, and \(\lim_{\beta\rightarrow\infty} f_1(\beta) = 0\) by definition of \(\mathcal{E}_\mathcal{T}(\beta,\mathcal{G}_p)\). Furthermore, according to the value of (logZ(β))′ and Hyp. 2, we get \(f_1'(\beta) = f_2(\beta) \neq 0\), ∀ β > 0. Function \(f_1\) being continuous and strictly monotonic with limit 0 at infinity, its sign is constant over ℝ + , so \(f_1(\beta) \neq 0\), ∀ β > 0. As a consequence, \(\mathcal{E}_\mathcal{T}(\beta,\mathcal{G}_p)\) is strictly monotonic for β > 0. Again, according to Hyp. 1, we obtain \(\mathcal{E}_\mathcal{T}(0,\mathcal{G}_p)>0\).

Since by definition \(\lim_{\beta \rightarrow \infty} \mathcal{E}_\mathcal{T}(\beta,\mathcal{G}_p) = 0\), function \(\mathcal{E}_\mathcal{T}(\beta,\mathcal{G}_p)\) is therefore strictly decreasing on \(\mathbb{R}_+\) and finally \(\mathcal{E}_\mathcal{T}(0,\mathcal{G}_p) = \max_{\beta \in \mathbb{R}_+} \mathcal{E}_\mathcal{T}(\beta,\mathcal{G}_p)\). Since logZ(β) is a strictly increasing function of β, its reciprocal is strictly decreasing on ℝ + , so \(\mathcal{A}_\mathcal{T}(0,\mathcal{G}_p) = \max_{\beta \in \mathbb{R}_+} \mathcal{A}_\mathcal{T}(\beta,\mathcal{G}_p)\). Note finally that Hyp. 2 is empirically observed on numerous graphs of different size and shape. The only exceptions were found when the topological similarity measure \(\mathcal{L}_\mathcal{T}(\mathcal{G}_p)\) was large.
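Proposition 2 can be illustrated numerically. The sketch below is ours, with arbitrary toy graphs; it assumes the extrapolated log-PF of Eq. 8 takes the form \(\log \widetilde{Z}_\mathcal{T}(\beta,\mathcal{G}_p) = \frac{c_\mathcal{T}}{c_{\mathcal{G}_p}}(\log Z_{\mathcal{G}_p}(\beta) - \log L) + \log L\), as used in the proof of Proposition 1, and evaluates \(\mathcal{E}_\mathcal{T}(\beta,\mathcal{G}_p)\) for a 2×3 target grid extrapolated from a 2×2 reference grid:

```python
import itertools
import math

def log_Z(edges, n_sites, L, beta):
    """Exact log-PF by enumeration, U(q) = sum_{j~k} I(q_j = q_k)."""
    z = 0.0
    for q in itertools.product(range(L), repeat=n_sites):
        u = sum(q[j] == q[k] for j, k in edges)
        z += math.exp(beta * u)
    return math.log(z)

def grid_edges(h, w):
    """4-connectivity cliques of an h x w Cartesian grid."""
    e = []
    for i in range(h):
        for j in range(w):
            if j + 1 < w:
                e.append((i * w + j, i * w + j + 1))
            if i + 1 < h:
                e.append((i * w + j, (i + 1) * w + j))
    return e

L = 2
eT, eG = grid_edges(2, 3), grid_edges(2, 2)   # target T, reference G_p
cT, cG = len(eT), len(eG)                     # c_T = 7, c_Gp = 4
errs = []
for beta in (0.0, 1.0, 2.0, 4.0, 8.0):
    zT = log_Z(eT, 6, L, beta)
    z_tilde = cT / cG * (log_Z(eG, 4, L, beta) - math.log(L)) + math.log(L)
    errs.append((zT - z_tilde) ** 2)
print(errs)   # largest at beta = 0, vanishing as beta grows
```

On this pair of grids the squared error is maximal at β = 0 and decays monotonically, as the proposition predicts; Hyp. 1 holds here since (6−1)/7 ≠ (4−1)/4.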

C Mean Field-based Partition Function Estimates

Following [9], we recall here how to derive from mean-field theory the zeroth and first order approximations of the partition function, with particular attention to the L-color symmetric Potts model.

The negative energy \(U^{\rm MF}(\boldsymbol{w})\) actually reads \(U^{\rm MF}(\boldsymbol{w})= \sum_{i=1}^n w_i^\mathrm{t} \sum_{j\in N_i} \bar{w}_j\) where \(\bar{w}_j=\mathrm{E}^{\rm MF}\bigl[W_j\bigr]\), so that the zeroth order PF approximation based on the mean field approximation is given by:

$$ Z^{\rm MF}(\beta)=\sum\limits_{\boldsymbol{w}} \exp(\beta U^{\rm MF}(\boldsymbol{w})) = \prod\limits_{i=1}^n \sum\limits_{w_i} \exp\Bigl(\beta w_i^\mathrm{t} \sum\limits_{j\in N_i} \bar{w}_j\Bigr) $$

The first order approximation is based upon the Gibbs-Bogoliubov-Feynman [21] inequality that states:

$$ Z(\beta)\geqslant Z^{\rm MF}(\beta) \exp\big(\mathrm{E}^{\rm MF}\bigl[U(W) - U^{\rm MF}(W)\bigr]\big). $$

The mean field model (4) is optimal among models with the factorization property, in the sense that it maximizes the lower bound in inequality (19) over such models. Considering the Taylor expansion around zero of \(\exp(-\mathrm{E}^{\rm MF}\bigl[U(W) - U^{\rm MF}(W)\bigr])\), the right-hand side of inequality (19), denoted by Z GBF(β) in what follows:

$$ Z^{\rm GBF}(\beta) = Z^{\rm MF}(\beta) \exp\big(\mathrm{E}^{\rm MF}\bigl[U(W) - U^{\rm MF}(W)\bigr]\big) $$

can be seen as a first order approximation of Z, and is thus much closer to it than the zeroth order approximation Z MF(β). From Eq. 17, the first order approximation of Z can be deduced:

$$ Z^{\rm GBF}(\beta) = Z^{\rm MF}(\beta) \exp\Bigg(-\frac{\beta}{2} \sum\limits_{i=1}^n \bar{w}_i \sum\limits_{j\in N_i} \bar{w}_j\Bigg). $$
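As a sanity check of the two approximations (our own sketch, not part of the paper's experiments; the graph, random seed and function names are arbitrary choices), one can run the mean-field fixed point \(\bar{w}_i \propto \exp(\beta \sum_{j\in N_i}\bar{w}_j)\) on a toy graph and compare \(Z^{\rm MF}\) and \(Z^{\rm GBF}\) with an exactly enumerated PF; by inequality (19), \(Z^{\rm GBF}\) must remain a lower bound:

```python
import itertools
import math
import numpy as np

def exact_log_Z(edges, n_sites, L, beta):
    """Exact log-PF of the symmetric L-color Potts field by enumeration."""
    z = 0.0
    for q in itertools.product(range(L), repeat=n_sites):
        u = sum(q[j] == q[k] for j, k in edges)
        z += math.exp(beta * u)
    return math.log(z)

def mf_gbf_log_Z(edges, n_sites, L, beta, iters=200):
    """Zeroth (MF) and first order (GBF) log-PF approximations of [9]."""
    nbrs = [[] for _ in range(n_sites)]
    for j, k in edges:
        nbrs[j].append(k)
        nbrs[k].append(j)
    rng = np.random.default_rng(0)
    w = rng.dirichlet(np.ones(L), size=n_sites)   # perturbed initial marginals
    for _ in range(iters):                        # mean-field fixed point
        for i in range(n_sites):
            field = beta * sum(w[j] for j in nbrs[i])
            e = np.exp(field - field.max())
            w[i] = e / e.sum()
    log_mf = sum(
        math.log(np.sum(np.exp(beta * sum(w[j] for j in nbrs[i]))))
        for i in range(n_sites))
    corr = -0.5 * beta * sum(w[i] @ w[j]
                             for i in range(n_sites) for j in nbrs[i])
    return log_mf, log_mf + corr

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # 4-cycle, c = 4
beta, L, n = 0.3, 2, 4
exact = exact_log_Z(edges, n, L, beta)
mf, gbf = mf_gbf_log_Z(edges, n, L, beta)
print(exact, mf, gbf)
```

On this example Z GBF lower-bounds the exact PF and is much tighter than Z MF; at the uniform fixed point the slope of logZ MF in β is 2c/L, roughly twice the true slope c/L at small β.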



Cite this article

Risser, L., Vincent, T., Forbes, F. et al. Min-max Extrapolation Scheme for Fast Estimation of 3D Potts Field Partition Functions. Application to the Joint Detection-Estimation of Brain Activity in fMRI. J Sign Process Syst 65, 325–338 (2011). https://doi.org/10.1007/s11265-010-0505-6



  • Markov random field
  • Potts fields
  • Partition function
  • fMRI
  • Bayesian inference
  • MCMC
  • Detection-estimation