Improved Convolutive and Under-Determined Blind Audio Source Separation with MRF Smoothing

Convolutive and under-determined blind audio source separation from noisy recordings is a challenging problem. Several computational strategies have been proposed to address it. This study is concerned with several modifications to the expectation-maximization-based algorithm, which iteratively estimates the mixing and source parameters. This strategy assumes that each entry in a source spectrogram is modeled by superimposed Gaussian components that are mutually and individually independent across frequency and time bins. In practice, this independence assumption is too weak for adjacent bins, so we relax it by assuming a locally smooth temporal and frequency structure in the source power spectrograms. Local smoothness is enforced by incorporating a Gibbs prior into the complete-data likelihood function, which models the interactions between neighboring spectrogram bins using a Markov random field (MRF). Simulations using audio files from the Stereo Audio Source Separation Evaluation Campaign (SiSEC 2008) demonstrate the high efficiency of the proposed improvement.

In a cocktail-party problem, microphones receive noisy mixtures of acoustic signals that propagate along multiple paths from their sources. In a real scenario, the number of audio sources may be greater than the number of microphones, audio sources may have different timbres and similar pitches, and audio signals may be only locally stationary.
A convolutive and under-determined mixing model needs to be adopted to describe this problem. There are several techniques for solving convolutive unmixing problems [13]. Some of these [14] operate in the time domain by solving the alternative finite impulse response (FIR) inverse model using independent component analysis (ICA) methods [2]. Another approach is to extract meaningful features from the time-frequency (TF) representations of the mixtures. This approach tends to be more efficient than the ICA-based techniques, especially when the number of microphones is lower than the number of sources. Acoustic signals are usually sparse in the TF domain, so the source signals can be separated efficiently even if they partially overlap and the problem is under-determined. These features can be extracted using several techniques, including TF masking [15,16], frequency bin-wise clustering with permutation alignment (FBWC-PA) [17,18], subspace projection [19], hidden Markov models (HMM) [20], the interaural phase difference (IPD) [21], nonnegative matrix factorization (NMF) [22,23], and nonnegative tensor factorization (NTF) [24].
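The TF-sparsity argument can be illustrated with a toy sketch (synthetic data, not one of the paper's benchmarks): when two sources occupy mostly disjoint TF bins, assigning each bin of the mixture to the dominant source, as in ideal binary masking, recovers a source almost exactly even though there are more sources than observations.

```python
import numpy as np

# Toy illustration of TF-domain sparsity: two sources with (here, fully)
# disjoint supports in the TF plane, mixed into a single observation.
rng = np.random.default_rng(0)
F, T = 64, 100

s1 = np.zeros((F, T)); s2 = np.zeros((F, T))
s1[:32, :] = rng.random((32, T))   # source 1 occupies the low bins
s2[32:, :] = rng.random((32, T))   # source 2 occupies the high bins

x = s1 + s2  # single-channel instantaneous mixture (fewer sensors than sources)

# Ideal binary mask: assign each TF bin to the dominant source.
mask1 = (s1 >= s2).astype(float)
est1 = mask1 * x

# With disjoint supports, the mask recovers source 1 exactly.
print(np.allclose(est1, s1))  # True
```

Real sources overlap partially, so masking is imperfect in practice, which is one motivation for the model-based NMF approaches discussed next.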
Nonnegative matrix factorization [25] is a feature extraction method with many real-world applications [26]. A convolutive NMF-based unmixing model was proposed by Smaragdis [22]. Ozerov and Févotte [23] developed the EM-NMF algorithm, which is suitable for unsupervised convolutive and possibly under-determined unmixing of audio sources using only stereo observations. Their source model was based on the generalized Wiener filtering model [27-29], which assumes that each source is locally stationary and can be expressed in terms of superimposed amplitude-modulated Gaussian components. Thus, the power spectrogram of each source can be factorized into lower-rank nonnegative matrices, which facilitates the use of NMF for estimating the frequency and temporal profiles of each latent source component. In the TF representation, the latent components are assumed to be mutually and individually independent across frequency and time bins. However, this assumption is very weak for adjacent bins because real audio signals have locally smooth frequency and temporal structures.
Motivated by several papers on smoothness [26,28,30,31] in BSS models, we attempt to further improve the EM-NMF algorithm by enforcing local smoothness in both the frequency and temporal profiles of the NMF factors. Similar to [28,30,32], we introduce a priori knowledge into the NMF-based model using a Bayesian framework, although our approach is based on a Gibbs prior with a Markov random field (MRF) model that describes pairwise interactions among adjacent bins in the spectrograms. As demonstrated in [33], the MRF model with Green's function, which is well known in many tomographic image reconstruction applications [34], can improve the EM-NMF algorithm. In this paper, we extend the results presented in [33] to other smoothing functions, particularly the more flexible simultaneous autoregressive (SAR) model, which is more appropriate in terms of hyperparameter estimation and computational complexity.
The rest of this paper is organized as follows. The next section reviews the underlying separation model. Section 3 is concerned with MRF smoothing. The optimization algorithm is described in Sect. 4. Audio source separation experiments are presented in Sect. 5. Finally, the conclusions are provided in Sect. 6.

Model
Let I microphones receive signals that can be modeled as a noisy convolutive mixture of J audio signals. The signal received by the i-th microphone ($i = 1, \ldots, I$) can be expressed as

$$\tilde{x}_i(t) = \sum_{j=1}^{J} \sum_{l=0}^{L-1} \tilde{a}_{ijl}\, \tilde{s}_j(t - l) + \tilde{n}_i(t), \qquad (1)$$

where $\tilde{a}_{ijl}$ represents the corresponding mixing filter coefficient, $\tilde{s}_j(t)$ is the j-th source signal ($j = 1, \ldots, J$), $\tilde{n}_i(t)$ is the additive noise, and L is the length of the mixing filter.
In the TF domain, the model (1) can be expressed as

$$x_{ift} = \sum_{j=1}^{J} a_{ijf}\, s_{jft} + n_{ift}, \quad \text{or equivalently} \quad \mathbf{x}_{ft} = \mathbf{A}_f \mathbf{s}_{ft} + \mathbf{n}_{ft}, \qquad (2)$$

where $\mathbf{A}_f = [a_{ijf}] \in \mathbb{C}^{I \times J}$. The noise $n_{ift}$ is assumed to be stationary and spatially uncorrelated, i.e.,

$$\mathbf{n}_{ft} \sim \mathcal{N}_c(\mathbf{0}, \mathbf{\Sigma}_n), \qquad (3)$$

where $\mathbf{\Sigma}_n = \mathrm{diag}(\sigma_i^2)$ and $\mathcal{N}_c(\mathbf{0}, \mathbf{\Sigma}_n)$ is a proper complex Gaussian distribution with zero mean and covariance matrix $\mathbf{\Sigma}_n$.
Benaroya et al. [27] described an audio source $\tilde{s}(t)$ as a superimposed amplitude-modulated Gaussian process:

$$\tilde{s}(t) = \sum_{r=1}^{R} h_r(t)\, \tilde{w}_r(t), \qquad (4)$$

where $h_r(t)$ is a slowly varying amplitude parameter of the r-th component ($r = 1, \ldots, R$), and $\tilde{w}_r(t)$ is a stationary zero-mean Gaussian process with the power spectral density $\sigma_r^2(f)$. The TF representation of (4) leads to

$$s_{ft} = \sum_{r=1}^{R} h_{rt}\, w_{r,ft}, \qquad (5)$$

where $w_{r,ft} \sim \mathcal{N}_c(0, \sigma_r^2(f))$. The power spectrogram of (5) is given by $|s_{ft}|^2 = \sum_{r=1}^{R} w_{fr} h_{rt}$, where $w_{fr} = \sigma_r^2(f)$. Thus, the spectrogram of the j-th source $\tilde{s}_j(t)$ can be factorized as follows:

$$|\mathbf{S}_j|^{\circ 2} \cong \mathbf{W}_j \mathbf{H}_j, \qquad (6)$$

where $\mathbf{S}_j = [s_{jft}] \in \mathbb{C}^{F \times T}$, $\mathbf{W}_j \in \mathbb{R}_+^{F \times R_j}$, $\mathbf{H}_j \in \mathbb{R}_+^{R_j \times T}$, $R_j$ is the number of latent components in the j-th source, and $\mathbb{R}_+$ is the nonnegative orthant of the Euclidean space. The column vectors of $\mathbf{W}_j$ represent the frequency profiles of the j-th source, while the row vectors of $\mathbf{H}_j$ are its temporal profiles.
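The factorization $|\mathbf{S}_j|^{\circ 2} \cong \mathbf{W}_j \mathbf{H}_j$ can be sketched with standard multiplicative updates for the Itakura-Saito divergence (a common NMF scheme; the paper itself estimates W and H inside an EM loop, so this is only an illustrative alternative on synthetic data):

```python
import numpy as np

# Sketch: factorize a nonnegative "power spectrogram" V into frequency
# profiles W and temporal profiles H, V ~= W @ H, using multiplicative
# updates for the Itakura-Saito divergence.
rng = np.random.default_rng(1)
F, T, R = 30, 50, 4

V = rng.random((F, R)) @ rng.random((R, T)) + 1e-6  # synthetic spectrogram
W = rng.random((F, R)) + 0.1
H = rng.random((R, T)) + 0.1

def is_div(V, Vh):
    # Itakura-Saito divergence between spectrograms V and Vh
    Q = V / Vh
    return np.sum(Q - np.log(Q) - 1.0)

d0 = is_div(V, W @ H)
for _ in range(200):
    Vh = W @ H
    W *= ((Vh ** -2 * V) @ H.T) / (Vh ** -1 @ H.T)
    Vh = W @ H
    H *= (W.T @ (Vh ** -2 * V)) / (W.T @ Vh ** -1)
d1 = is_div(V, W @ H)
print(d1 < d0)  # the fit improves over the iterations
```

The columns of W then play the role of the frequency profiles and the rows of H the temporal profiles, as in (6).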
Févotte et al. [28] transformed the model (5) to the following form:

$$s_{ft} = \sum_{r=1}^{R} c_{rft}, \quad c_{rft} \sim \mathcal{N}_c(0, w_{fr} h_{rt}), \qquad (7)$$

and

$$|s_{ft}|^2 \cong \sum_{r=1}^{R} |c_{rft}|^2. \qquad (8)$$

Consequently, the model (2) can be expressed as

$$\mathbf{x}_{ft} = \bar{\mathbf{A}}_f \mathbf{c}_{ft} + \mathbf{n}_{ft}, \qquad (9)$$

where $R = |\mathcal{R}|$ is the number of entries in the set $\mathcal{R} = \bigcup_{j=1}^{J} \mathcal{R}_j$ of latent component indices, and $\bar{\mathbf{A}}_f = [\bar{a}_{irf}] \in \mathbb{C}^{I \times R}$ is created from the columns of the matrix $\mathbf{A}_f$. For example, assuming $\forall j: |\mathcal{R}_j| = \bar{R}$, we have $R = J\bar{R}$, and $\bar{\mathbf{A}}_f = [\mathbf{A}_f, \ldots, \mathbf{A}_f] \in \mathbb{C}^{I \times R}$ is the augmented mixing matrix [23] created from $\bar{R}$ copies of $\mathbf{A}_f$.
From (8) and (9), we have the factorization of the complete-data PDF:

$$P(\mathcal{X}, \mathcal{C}, \mathbf{W}, \mathbf{H} \mid \mathcal{A}, \mathbf{\Sigma}_n) = P(\mathcal{X} \mid \mathcal{C}, \mathcal{A}, \mathbf{\Sigma}_n)\, P(\mathcal{C} \mid \mathbf{W}, \mathbf{H})\, P(\mathbf{W})\, P(\mathbf{H}). \qquad (10)$$

To estimate the parameters $\mathcal{A} = [a_{ijf}] \in \mathbb{C}^{I \times J \times F}$, $\mathbf{W}$, and $\mathbf{H}$, we maximize the complete-data log-likelihood:

$$\ln P(\mathcal{X}, \mathcal{C}, \mathbf{W}, \mathbf{H} \mid \mathcal{A}, \mathbf{\Sigma}_n) = \ln P(\mathcal{X} \mid \mathcal{C}, \mathcal{A}, \mathbf{\Sigma}_n) + \ln P(\mathcal{C} \mid \mathbf{W}, \mathbf{H}) + \ln P(\mathbf{W}) + \ln P(\mathbf{H}). \qquad (11)$$

From (3) and (9), we have the joint conditional PDF for $\mathcal{X}$:

$$P(\mathcal{X} \mid \mathcal{C}, \mathcal{A}, \mathbf{\Sigma}_n) = \prod_{f,t} \frac{1}{\det(\pi \mathbf{\Sigma}_n)} \exp\left\{ -(\mathbf{x}_{ft} - \bar{\mathbf{A}}_f \mathbf{c}_{ft})^H \mathbf{\Sigma}_n^{-1} (\mathbf{x}_{ft} - \bar{\mathbf{A}}_f \mathbf{c}_{ft}) \right\}. \qquad (12)$$

Based on (12), the log-likelihood term in (11) can be expressed as

$$\ln P(\mathcal{X} \mid \mathcal{C}, \mathcal{A}, \mathbf{\Sigma}_n) = -\sum_{f,t} \left[ (\mathbf{x}_{ft} - \bar{\mathbf{A}}_f \mathbf{c}_{ft})^H \mathbf{\Sigma}_n^{-1} (\mathbf{x}_{ft} - \bar{\mathbf{A}}_f \mathbf{c}_{ft}) + \ln \det(\pi \mathbf{\Sigma}_n) \right], \qquad (13)$$

where $\mathbf{c}_{ft} = [c_{1ft}, \ldots, c_{Rft}]^T \in \mathbb{C}^R$, $\mathbf{s}_{ft} = [s_{1ft}, \ldots, s_{Jft}]^T \in \mathbb{C}^J$, and $\mathbf{x}_{ft} = [x_{1ft}, \ldots, x_{Ift}]^T \in \mathbb{C}^I$. The joint conditional PDF for $\mathcal{C}$ comes from the model (5):

$$P(\mathcal{C} \mid \mathbf{W}, \mathbf{H}) = \prod_{r,f,t} (\pi w_{fr} h_{rt})^{-1} \exp\left( -\frac{|c_{rft}|^2}{w_{fr} h_{rt}} \right). \qquad (14)$$

From (14), we have the log-likelihood functional for $\mathcal{C}$:

$$\ln P(\mathcal{C} \mid \mathbf{W}, \mathbf{H}) = -\sum_{r,f,t} \left[ \frac{|c_{rft}|^2}{w_{fr} h_{rt}} + \ln(\pi w_{fr} h_{rt}) \right]. \qquad (15)$$

The negative log-likelihood in (15) is, up to additive constants, the Itakura-Saito (IS) divergence [35], which is particularly useful for measuring the goodness of fit between spectrograms. The IS divergence is the special case of the β-divergence when β → -1 [26].
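A one-line check of the property that motivates the IS divergence for spectrograms, namely its scale invariance, $d_{IS}(\lambda x \mid \lambda y) = d_{IS}(x \mid y)$, which means low-energy and high-energy TF bins contribute on an equal footing:

```python
import numpy as np

# The Itakura-Saito divergence d_IS(x|y) = x/y - log(x/y) - 1 is
# scale-invariant, which suits the large dynamic range of audio spectra.
def d_is(x, y):
    return x / y - np.log(x / y) - 1.0

x, y, lam = 4.0, 2.5, 1e3
print(np.isclose(d_is(lam * x, lam * y), d_is(x, y)))  # True
```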
The priors $P(\mathbf{W})$ and $P(\mathbf{H})$ in (10) can be determined in many ways. Févotte et al. [28] proposed determining the priors using Markov chains with inverse-Gamma distributions. In our approach, we propose to model the priors with the Gibbs distribution, which is particularly useful for enforcing local smoothness in images.

MRF Smoothing
Let us assume that prior information on the total smoothness of the estimated factors W and H is modeled using the following Gibbs distributions:

$$P(\mathbf{W}) = \frac{1}{Z_W} \exp\{-\alpha_W U(\mathbf{W})\}, \quad P(\mathbf{H}) = \frac{1}{Z_H} \exp\{-\alpha_H U(\mathbf{H})\}, \qquad (16)$$

where $Z_W$ and $Z_H$ are partition functions, $\alpha_W$ and $\alpha_H$ are regularization parameters, and $U(\mathbf{P})$ is a total energy function, which measures the total roughness in $\mathbf{P}$. The function $U(\mathbf{P})$ is often formulated with respect to the MRF model, which is commonly used in image reconstruction for modeling local smoothness. The functions $U(\mathbf{W})$ and $U(\mathbf{H})$ can be determined for the matrices W and H in the following way:

$$U(\mathbf{W}) = \sum_{f,r} \sum_{l \in S_f} \nu_{fl}\, \psi(w_{fr} - w_{lr}, \delta_W), \quad U(\mathbf{H}) = \sum_{r,t} \sum_{l \in S_t} \nu_{tl}\, \psi(h_{rt} - h_{rl}, \delta_H). \qquad (17)$$

For first-order interactions (nearest neighborhood), we have $S_f = \{f-1, f+1\}$ and the weighting factor $\nu_{fl} = 1$. The parameters $\delta_W$ and $\delta_H$ are scaling factors, while $\psi(\xi, \delta)$ is a potential function of $\xi$ that can take different forms. The potential functions that can be applied to the EM-NMF algorithm are listed in Table 1.
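A minimal numerical sketch of the roughness energy for first-order interactions along the frequency axis, with the Gaussian potential $\psi(\xi, \delta) = (\xi/\delta)^2$ and unit weighting factors (synthetic matrices, chosen only to show that a smooth W scores lower than a noisy one):

```python
import numpy as np

# Total roughness U(W) with nearest-neighbor interactions: sum of squared
# differences between adjacent frequency bins, scaled by delta.
def U(W, delta=1.0):
    diff = np.diff(W, axis=0)          # pairs (w_{f+1,r} - w_{f,r})
    return np.sum((diff / delta) ** 2)

rng = np.random.default_rng(2)
f = np.linspace(0, 1, 100)
W_smooth = np.outer(np.sin(np.pi * f) + 1.1, np.ones(3))   # smooth profiles
W_rough = W_smooth + 0.3 * rng.random(W_smooth.shape)      # noisy profiles

print(U(W_smooth) < U(W_rough))  # True: smoothness lowers the energy
```

Under the Gibbs prior (16), the lower-energy (smoother) W is the more probable one, which is exactly the regularization effect sought here.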
According to Lange [41], a robust potential function in the Gibbs prior should have the following properties: nonnegative, even, zero at $\xi = 0$, strictly increasing for $\xi > 0$, unbounded, and convex with a bounded first derivative. Of the functions listed in Table 1, Green's function satisfies all of these properties, and consequently, it was selected for the tests in [33]. Unfortunately, applying Green's function to both matrices W and H demands the determination of two hyperparameters, $\delta_W$ and $\delta_H$, and two penalty parameters, $\alpha_W$ and $\alpha_H$. Moreover, data-driven hyperparameter estimation usually involves an approximation of the partition functions $Z_W$ and $Z_H$, which is not easy in this task.
The Gaussian function $\psi(\xi, \delta) = (\xi/\delta)^2$, as shown in Table 1, does not have a bounded first derivative, but its scaling parameter δ may be merged with the penalty parameter α. Consequently, only two parameters need to be determined. The MRF model with a Gaussian potential function is actually the SAR model [42-44], which is widely used in many scientific fields [44,45] to represent interactions among spatial data with Gaussian noise. Let $\mathbf{w}_r \in \mathbb{R}_+^F$ be the r-th column of the matrix W, and $\mathbf{h}_r \in \mathbb{R}_+^{1 \times T}$ be the r-th row of the matrix H. The random variables in the vectors $\mathbf{w}_r$ and $\mathbf{h}_r$ can be modeled using the following stochastic equations:

$$\mathbf{w}_r = \mathbf{S}^{(W)} \mathbf{w}_r + \boldsymbol{\epsilon}, \quad \mathbf{h}_r^T = \mathbf{S}^{(H)} \mathbf{h}_r^T + \boldsymbol{\epsilon}, \qquad (18)$$

where $\mathbf{S}^{(W)} \in \mathbb{R}^{F \times F}$ and $\mathbf{S}^{(H)} \in \mathbb{R}^{T \times T}$ are symmetric matrices of spatial dependencies between the random variables, $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I})$ is i.i.d. Gaussian noise, and $\mathbf{I}$ is an identity matrix of the corresponding size. According to [45,46], the spatial dependence matrices can be expressed as $\mathbf{S}^{(W)} = \gamma \mathbf{Z}^{(W)}$ and $\mathbf{S}^{(H)} = \gamma \mathbf{Z}^{(H)}$, where γ is a constant that ensures that the matrices $\mathbf{C}^{(W)} = \mathbf{I}_F - \mathbf{S}^{(W)}$ and $\mathbf{C}^{(H)} = \mathbf{I}_T - \mathbf{S}^{(H)}$ are positive-definite. For first-order interactions, $z^{(W)}_{f,f+1} = z^{(W)}_{f+1,f} = 1$ and $z^{(H)}_{t,t+1} = z^{(H)}_{t+1,t} = 1$ for adjacent indices, and $z_{mf} = z_{tn} = 0$ otherwise. For P-order interactions, each entry $w_{fr}$ and $h_{rt}$ has the corresponding sets of neighbors $\{w_{f-m,r}\}$, $\{w_{f+m,r}\}$, $\{h_{r,t-m}\}$, $\{h_{r,t+m}\}$ with $m = 1, \ldots, P$. As a consequence, $\mathbf{Z}^{(W)}$ and $\mathbf{Z}^{(H)}$ are symmetric band matrices with P sub-diagonals and P super-diagonals, the entries of which are ones, and zeros otherwise. The matrices $\mathbf{C}^{(W)}$ and $\mathbf{C}^{(H)}$ are positive-definite if $\gamma < (2P)^{-1}$ for P-order interactions [45,46]. We selected $\gamma = (2P)^{-1} - \epsilon$, where ε is a small constant. In the SAR model, the Gibbs priors (16) may be expressed as joint multivariate Gaussian priors:

$$P(\mathbf{w}_r) \propto \exp\left\{ -\frac{1}{2\sigma^2}\, \mathbf{w}_r^T \left( \mathbf{C}^{(W)} \right)^T \mathbf{C}^{(W)} \mathbf{w}_r \right\}, \qquad (19)$$

with an analogous form for $P(\mathbf{h}_r)$, where $\lambda_f(\mathbf{C}^{(W)}) \cong 1 - \cos\left(\frac{\pi f}{F}\right)$ is the f-th eigenvalue of the matrix $\mathbf{C}^{(W)}$. Similarly, $\lambda_t(\mathbf{C}^{(H)}) \cong 1 - \cos\left(\frac{\pi t}{T}\right)$, which simplifies the hyperparameter estimation.
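The band structure of Z and the condition that the spatial-dependence constant stays below $(2P)^{-1}$ can be verified numerically (a sketch with arbitrary matrix sizes; `eps` plays the role of the small constant ε):

```python
import numpy as np

def sar_C(n, P, eps=1e-3):
    # Z: symmetric band matrix of ones with P sub- and super-diagonals
    Z = np.zeros((n, n))
    for m in range(1, P + 1):
        Z += np.diag(np.ones(n - m), m) + np.diag(np.ones(n - m), -m)
    gamma = 1.0 / (2 * P) - eps   # just below the (2P)^{-1} bound
    return np.eye(n) - gamma * Z  # C = I - gamma * Z

for P in (1, 2, 3):
    C = sar_C(50, P)
    print(P, np.linalg.eigvalsh(C).min() > 0)  # positive-definite: True
```

Positive-definiteness of C is what makes the Gaussian prior in the SAR model proper, so its partition function has a closed form in terms of the eigenvalues of C.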

Algorithm
The EM algorithm [47] is applied to maximize $\ln P(\mathcal{X}, \mathcal{C}, \mathbf{W}, \mathbf{H} \mid \mathcal{A}, \mathbf{\Sigma}_n)$ in (11). To calculate the E-step, the log-likelihood functional (13) is transformed to the following form:

$$\ln P(\mathcal{X} \mid \mathcal{C}, \mathcal{A}, \mathbf{\Sigma}_n) = -\sum_{f,t} \mathrm{tr}\left\{ \mathbf{\Sigma}_n^{-1} \left( \mathbf{R}_{ft}^{(xx)} - \bar{\mathbf{A}}_f \mathbf{R}_{ft}^{(cx)} - \left( \mathbf{R}_{ft}^{(cx)} \right)^H \bar{\mathbf{A}}_f^H + \bar{\mathbf{A}}_f \mathbf{R}_{ft}^{(cc)} \bar{\mathbf{A}}_f^H \right) \right\} - FT \ln \det(\pi \mathbf{\Sigma}_n), \qquad (20)$$

where the correlation matrices are given by

$$\mathbf{R}_{ft}^{(xx)} = \mathbf{x}_{ft}\mathbf{x}_{ft}^H, \quad \mathbf{R}_{ft}^{(cx)} = \mathbf{c}_{ft}\mathbf{x}_{ft}^H, \quad \mathbf{R}_{ft}^{(cc)} = \mathbf{c}_{ft}\mathbf{c}_{ft}^H. \qquad (21)$$

These correlation matrices are the natural (sufficient) statistics of the complete data [47], so the sources $\mathbf{s}_{ft}$ and the latent components $\mathbf{c}_{ft}$ can be estimated by computing the conditional expectations of the natural statistics. According to [23], we have the following posterior estimates:

$$\hat{\mathbf{s}}_{ft} = \mathbf{\Sigma}_{ft}^{(s)} \mathbf{A}_f^H \left( \mathbf{A}_f \mathbf{\Sigma}_{ft}^{(s)} \mathbf{A}_f^H + \mathbf{\Sigma}_n \right)^{-1} \mathbf{x}_{ft}. \qquad (22)$$

Similarly, for the latent components, we have

$$\hat{\mathbf{c}}_{ft} = \mathbf{\Sigma}_{ft}^{(c)} \bar{\mathbf{A}}_f^H \left( \bar{\mathbf{A}}_f \mathbf{\Sigma}_{ft}^{(c)} \bar{\mathbf{A}}_f^H + \mathbf{\Sigma}_n \right)^{-1} \mathbf{x}_{ft}, \qquad (23)$$

where $\mathbf{\Sigma}_{ft}^{(s)} = \mathrm{diag}\left( \sum_{r \in \mathcal{R}_j} w_{fr} h_{rt} \right)_{j=1,\ldots,J}$ and $\mathbf{\Sigma}_{ft}^{(c)} = \mathrm{diag}\left( w_{fr} h_{rt} \right)_{r=1,\ldots,R}$ follow from (7). The conditional expectations of the sufficient statistics are as follows:

$$\hat{\mathbf{R}}_{ft}^{(cx)} = \hat{\mathbf{c}}_{ft} \mathbf{x}_{ft}^H, \qquad (24)$$

$$\hat{u}_{rft} = \mathbb{E}\left[ |c_{rft}|^2 \mid \mathbf{x}_{ft} \right] = |\hat{c}_{rft}|^2 + \left( \mathbf{\Sigma}_{ft}^{(c)} - \mathbf{\Sigma}_{ft}^{(c)} \bar{\mathbf{A}}_f^H \left( \bar{\mathbf{A}}_f \mathbf{\Sigma}_{ft}^{(c)} \bar{\mathbf{A}}_f^H + \mathbf{\Sigma}_n \right)^{-1} \bar{\mathbf{A}}_f \mathbf{\Sigma}_{ft}^{(c)} \right)_{rr}. \qquad (25)$$

In the M-step, from $\frac{\partial}{\partial w_{fr}} \ln P(\mathcal{X}, \mathcal{C}, \mathbf{W}, \mathbf{H} \mid \mathcal{A}, \mathbf{\Sigma}_n) = 0$, we have

$$\sum_t \left( \frac{\hat{u}_{rft}}{w_{fr}^2 h_{rt}} - \frac{1}{w_{fr}} \right) - \alpha_W \nabla_{w_{fr}} U(\mathbf{W}) = 0. \qquad (26)$$

Similarly, from $\frac{\partial}{\partial h_{rt}} \ln P(\mathcal{X}, \mathcal{C}, \mathbf{W}, \mathbf{H} \mid \mathcal{A}, \mathbf{\Sigma}_n) = 0$, we get

$$\sum_f \left( \frac{\hat{u}_{rft}}{w_{fr} h_{rt}^2} - \frac{1}{h_{rt}} \right) - \alpha_H \nabla_{h_{rt}} U(\mathbf{H}) = 0. \qquad (27)$$

The terms $\nabla_{w_{fr}} U(\mathbf{W})$ and $\nabla_{h_{rt}} U(\mathbf{H})$ in (26) and (27) take the following forms with respect to the potential functions:

• Gaussian (SAR model):

$$\nabla_{w_{fr}} U(\mathbf{W}) = \frac{2}{\delta_W^2} \sum_{l \in S_f} \nu_{fl} (w_{fr} - w_{lr}), \qquad (28)$$

• HL function (proposed by Hebert and Leahy [40]):

$$\nabla_{w_{fr}} U(\mathbf{W}) = \sum_{l \in S_f} \nu_{fl} \frac{2 (w_{fr} - w_{lr})}{\delta_W^2 + (w_{fr} - w_{lr})^2}, \qquad (29)$$

with analogous expressions for $\nabla_{h_{rt}} U(\mathbf{H})$.

Experiments

Experiments were conducted using selected sound recordings taken from the Stereo Audio Source Separation Evaluation Campaign (SiSEC 2008). This campaign aimed to evaluate the performance of source separation algorithms on stereo under-determined mixtures. We selected the benchmarks given in Table 2, which included speech recordings (three male voices: male3; three female voices: female3), three non-percussive music sources (nodrums), and three music sources that included drums (wdrums). The mixed signals were 10 s recordings sampled at 16 kHz (the standard settings of the recordings from the "Under-determined speech and music mixtures" datasets in SiSEC 2008). For each benchmark, the number of true sources was three (J = 3), but there were only two microphones (I = 2), that is, stereo recordings. Thus, in each case we faced an under-determined BSS problem. All instantaneous mixtures were obtained using the same mixing matrix with positive coefficients.
Synthetic convolutive mixtures were obtained for a meeting room with a 250 ms reverberation time using omnidirectional microphones with 1 m spacing. The spectrograms were obtained by a short-time Fourier transform (STFT) using half-overlapping sine windows. To create the spectrograms and recover the time-domain signals from the STFT coefficients, we used the corresponding stft_multi and istft_multi Matlab functions from the SiSEC 2008 webpage [48]. For the instantaneous and convolutive mixtures, the window lengths were set to 1,024 and 2,048 samples, respectively.
The EM-NMF algorithm was taken from Ozerov's homepage, while the MRF-EM-NMF algorithm was coded and extensively tested by Ochal [49].
The proposed algorithm is based on an alternating optimization scheme, which is intrinsically non-convex; hence, its initialization plays an important role. A poor initialization may result in slow convergence and early stagnation at an unfavorable local minimum of the objective function. As in many NMF algorithms, the factors W and H are initialized with uniformly distributed random numbers, whereas the entries of the matrix A are drawn from a zero-mean complex Gaussian distribution. After W and H have been initialized, the covariance matrices $\mathbf{\Sigma}_{ft}^{(s)}$ and $\mathbf{\Sigma}_{ft}^{(c)}$ given by (7) and (8) can be computed. A noise covariance matrix $\mathbf{\Sigma}_n$ is needed to update the E-step. Ozerov and Févotte [23] tested several techniques for determining this matrix. The E-step in MRF-EM-NMF is identical to that in EM-NMF [23], and hence, all of these techniques can be used in this experiment. The initial matrix $\mathbf{\Sigma}_n$ was determined based on the empirical variance of the observed power spectrograms.
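The E-step source estimate used here is the generalized Wiener filter from [23]; a minimal single-TF-bin sketch (with hypothetical dimensions and randomly drawn parameters, only to show the shapes involved in the stereo, three-source case) is:

```python
import numpy as np

rng = np.random.default_rng(3)
I, J = 2, 3  # under-determined: two microphones, three sources

A = rng.normal(size=(I, J)) + 1j * rng.normal(size=(I, J))   # mixing matrix A_f
Sigma_s = np.diag(rng.random(J) + 0.5)  # diagonal source covariance (from W, H)
Sigma_n = 0.01 * np.eye(I)              # noise covariance

x = A @ (rng.normal(size=J) + 1j * rng.normal(size=J))  # one observed TF bin

# Posterior (Wiener) estimate of the J source coefficients for this bin:
Sigma_x = A @ Sigma_s @ A.conj().T + Sigma_n
s_hat = Sigma_s @ A.conj().T @ np.linalg.inv(Sigma_x) @ x
print(s_hat.shape)  # (3,)
```

Even though only two observations are available per bin, the source covariance built from W and H lets the filter distribute the mixture energy across all three sources.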
The MRF-EM-NMF and EM-NMF algorithms were initialized using the same random values (for a given $\bar{R}$) and run for 1,500 iterations.
The choice of the parameters $\{\alpha_W, \alpha_H, \gamma_W, \gamma_H\}$ used in the Gibbs distributions also affected the performance. The regularization parameters can be fixed or varied across iterations. Motivated by iterative thresholding strategies [26], we used the following schedules:

• linear thresholding,
• nonlinear thresholding,
• fixed thresholding,

where k is the current iteration, $k_{max}$ is the maximum number of iterations, $\tau \in (0, 1)$ is the shape parameter, $m \in (0, 1)$ is the shift parameter, $k_1$ is the threshold, and α stands for either $\alpha_W$ or $\alpha_H$. All of these strategies aim to relax smoothing during the early iterations, when the descent directions in the updates are sufficiently steep, and to emphasize smoothing once noisy perturbations become significantly detrimental to the overall smoothness. They are motivated by standard regularization rules for ill-posed problems. We tested all of the thresholding strategies on instantaneous and convolutive mixtures, and we obtained the best performance with fixed thresholding using $k_1 = k_{max}/2$.
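Since the exact schedule formulas are not reproduced above, the following sketch only illustrates the stated behavior (weak smoothing early, strong smoothing later); the function bodies are illustrative guesses, not the paper's definitions:

```python
import numpy as np

def alpha_fixed(k, k_max, alpha, k1):
    # fixed thresholding (hypothetical form): smoothing switched on at k1
    return alpha if k >= k1 else 0.0

def alpha_linear(k, k_max, alpha):
    # linear thresholding (hypothetical form): ramp from 0 to alpha
    return alpha * k / k_max

def alpha_nonlinear(k, k_max, alpha, tau=0.1, m=0.5):
    # nonlinear thresholding (hypothetical form): sigmoid ramp with
    # shape parameter tau and shift parameter m
    return alpha / (1.0 + np.exp(-(k / k_max - m) / tau))

k_max, alpha = 1500, 0.5
print(alpha_fixed(100, k_max, alpha, k_max // 2))   # 0.0 before the threshold
print(alpha_fixed(1000, k_max, alpha, k_max // 2))  # 0.5 after the threshold
```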
The parameters $\delta_W$ and $\delta_H$ in the MRF models can be estimated using standard marginalization procedures or by maximizing the Type II ML estimate of (10). However, these techniques have a huge computational cost for the nonlinear potential functions in the MRF models. For practical reasons, they are not very useful for the GR or HL functions.
In this study, we tested all of the benchmarks in Table 2; the parameter settings used in these tests are listed in Table 3.
The separation results were evaluated in terms of the signal-to-distortion ratio (SDR) and the signal-to-interference ratio (SIR) [50]. Figure 1 shows the SDRs and SIRs averaged over the sources, which were estimated using EM-NMF and MRF-EM-NMF with various smoothing functions for the instantaneous and convolutive mixing models. For each sample in Table 2 and each smoothing function, the smoothing parameters were tuned optimally for a given fixed initializer. This unsupervised learning approach evaluated the efficiency of the smoothing functions with respect to a given recording scenario. In practice, however, the smoothing parameters need to be determined within a supervised learning framework. To test this option, each recording in Table 2 was divided into two 5 s excerpts for the training and testing stages. For each training excerpt, the smoothing parameters and initializer were selected to maximize the SDR performance. Testing was performed on the other excerpt with the same initializer. The results obtained during the testing stage with the instantaneous mixtures are shown in Fig. 2.
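For orientation, a simplified SDR can be computed directly from a reference and an estimate (this ignores the projection-based decomposition of the error into interference, noise, and artifact terms performed by the BSS Eval toolbox [50], so it is only a rough sketch on synthetic signals):

```python
import numpy as np

# Simplified SDR in dB, assuming the estimate is already time-aligned
# and correctly scaled with respect to the reference.
def sdr(s_true, s_est):
    err = s_true - s_est
    return 10.0 * np.log10(np.sum(s_true ** 2) / np.sum(err ** 2))

t = np.linspace(0, 1, 16000)                  # 1 s at 16 kHz
s = np.sin(2 * np.pi * 440 * t)               # reference source
noisy = s + 0.01 * np.sin(2 * np.pi * 1000 * t)  # small interference

# Smaller interference yields a higher SDR.
print(sdr(s, noisy) > sdr(s, s + 0.1 * np.sin(2 * np.pi * 1000 * t)))
```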
Fig. 1 Source separation results obtained with the MRF-EM-NMF (first- and second-order Gaussian, GR, and HL functions) and EM-NMF (no smoothing) algorithms after 1,500 iterations: (a) mean SDR (dB) for the instantaneous mixture, (b) mean SDR (dB) for the convolutive mixture, (c) mean SIR (dB) for the instantaneous mixture, (d) mean SIR (dB) for the convolutive mixture. The smoothing parameters were tuned separately for each mixture in Table 2.

For comparison, Table 4 shows the average SDR results and running times of several state-of-the-art algorithms applied to the mixtures in Table 2. The generalized Gaussian prior (GGP) algorithm [51] and the statistically sparse decomposition principle (SSDP) algorithm [52] were applied to the instantaneous mixtures. The convolutive mixtures were unmixed with the IPD [21], two versions of the FBWC-PA algorithm [17,18], and the Convolutive NMF [22]. Note that the last method in this list was based on supervised learning, whereas the others were unsupervised learning algorithms. In this case, the first 8 s excerpts of the 10 s source recordings were used for learning, while the remainder was used for testing. The average elapsed time measured using Matlab 2008a for 1,500 iterations with $\bar{R} = 12$, executed on a 64-bit Intel Quad Core CPU at 3 GHz with 8 GB RAM, was almost the same for the MRF-EM-NMF and EM-NMF algorithms (see Table 4).
The simulations demonstrate that MRF smoothing improved the source separation results in almost all test cases. The results confirm that instantaneous mixtures are considerably easier to separate than convolutive ones. The MRF-EM-NMF algorithm delivered the best mean SDR performance of all the algorithms tested on instantaneous mixtures. The highest SDR values were produced with instantaneously mixed non-percussive music sources, which is explained by the smooth frequency and temporal structures of non-percussive music spectrograms. If the source spectrograms were not very smooth (as with the percussive audio recordings), MRF smoothing gave only a slight improvement (see Figs. 1 and 2) for first-order MRF interactions, and even a slight deterioration for higher-order MRF interactions. According to Fig. 1, the HL function delivered the most promising SDR results, which were stable over a wide range of parameters. In each case with the instantaneous mixtures, the best results were produced with the same hyperparameter values, $\delta_W = 1$ and $\delta_H = 10$, and almost the same penalty parameter values $\alpha_W$ and $\alpha_H$. The SAR model also improved the results compared with the standard EM-NMF algorithm. Moreover, the SAR model was tuned using only two penalty parameters, and the partition function of the associated Gibbs prior can be derived in closed form, which may be very useful for data-driven hyperparameter estimation.

The source separation results produced with the MRF-EM-NMF algorithm for convolutive and under-determined mixtures were better than those obtained with the EM-NMF algorithm. Unfortunately, the SDR values show that these results are still far from perfect, even after 1,500 iterations, and thus further research is needed in this field. It is likely that additional prior information could be imposed, especially on the mixing operator, which might increase the efficiency considerably.
It should be noted that the SDR performance with both mixtures could still be improved by refining the associated parameters, especially in the MRF models, and by using more efficient initializers.

Conclusions
This study demonstrated that imposing MRF smoothing on the power spectrograms of audio sources estimated in under-determined unmixing problems may considerably improve the quality of the estimated audio sources. This is justified because any meaningful prior information improves performance, especially for under-determined problems. This study addressed the application of MRF smoothing to the EM-NMF algorithm, but this type of smoothing could be applied to many other related BSS algorithms based on feature extraction from power spectrograms. Thus, the theoretical results presented in this paper may have broad practical applications. Clearly, further studies are needed to improve this technique for convolutive mixtures and to integrate regularization parameter estimation techniques into the main algorithm.