Abstract
It is vital to identify Mild Cognitive Impairment (MCI) subjects who will progress to Alzheimer's Disease (AD), so that early treatment can be administered. Recent studies show that using complementary information from multimodality data may improve model performance for this prediction problem. However, multimodality data are often incomplete, rendering prediction models that rely on complete data unusable. One way to deal with this issue is to first impute the missing values, and then build a classifier on the completed data. This two-step approach, however, may produce suboptimal classifier output, as errors in the imputation may propagate to the classifier during training. To address this issue, we propose a unified framework that jointly performs feature selection, data denoising, missing-value imputation, and classifier learning. To this end, we use a low-rank constraint to impute the missing values and denoise the data simultaneously, while using a regression model for feature selection and classification. The feature weights learned by the regression model are integrated into the low-rank formulation to focus on discriminative features when denoising and imputing data, while the resulting low-rank matrix is used for classifier learning. These two components interact and correct each other iteratively using the Alternating Direction Method of Multipliers (ADMM). Experimental results on the incomplete multimodality ADNI dataset show that our proposed method outperforms the comparison methods.
This work was supported in part by NIH grants AG053867, EB008374, AG041721, AG049371, and AG042599.
1 Introduction
Alzheimer's Disease (AD) is the most common type of dementia, in which brain neurons degenerate progressively, causing affected patients to gradually lose memory, cognitive, and motor abilities. As AD is irreversible and imposes an enormous economic and social burden on the community, it is vital to detect its prodromal stage, called Mild Cognitive Impairment (MCI), as early as possible, so that patients can be treated to potentially slow down or stop the disease progression. Many AD biomarkers have been developed, including measurements derived from neuroimaging data (i.e., magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET)), and from biological data such as cerebrospinal fluid (CSF). Recent studies have shown that the complementary information from different data modalities can improve the accuracy of multimodality-based AD prediction models [14]. Unfortunately, samples with complete multimodality data are limited; e.g., in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, only about \(\frac{1}{4}\) of the samples contain complete MRI, PET and CSF data at baseline.
The easiest way to deal with missing data is perhaps to discard the samples with missing values in any of their modalities. Though convenient, this approach discards a huge amount of useful information and leaves a smaller subset of data for analysis. To use all the samples for analysis, we can impute the missing data [4, 6, 11] based on the available information from other samples, and subsequently perform classification on the completed dataset. However, as shown in various studies [8, 10, 13], most imputation methods are not accurate for block-wise missing data as in our case, and the imputation error may propagate to the subsequent classifier and cause unstable performance. In addition, as this is a two-step approach, there is no way for the classification step to provide feedback to the imputation step so that it can focus on discriminative features.
To address the above issues, we propose a novel diagnostic model (using incomplete multimodality data) that simultaneously imputes the missing values (while also denoising the data) and learns a classifier (while also selecting discriminative features). To this end, we assume our data is low-rank (similar to previous works [1, 4, 5, 9, 10]) and incorporate a low-rank matrix completion [4] algorithm to impute the missing values, while also denoising the feature matrix. Furthermore, we use a linear classification model to learn the mapping between the denoised feature matrix and the corresponding labels. These two processes are optimized in an intertwined manner within a single optimization objective, so that they can correct each other to obtain a more robust prediction model. For instance, the missing values are imputed based not only on the peer samples, but also in a way that allows them to be classified properly, while the classifier weights are corrected based on the denoised and imputed data. In addition, we regularize the weight vector of the classifier (e.g., with \(\ell _1\) or \(\ell _2\) norm regularization) to control how specific features contribute to building the classifier [1, 13].
2 Method
2.1 Notation
Let \(\mathbf {X} \in \mathbb R^{N\times d} \) denote the feature matrix of N subjects (i.e., samples), each containing d-dimensional features from MRI, PET and clinical score data. As not all the subjects have complete multimodality data, this matrix is incomplete, i.e., feature values of some modalities for some subjects are missing. We use \(\varOmega \) to denote the index set of known (or observed) values, and \(\bar{\varOmega }\) to denote its complement, i.e., the index set of the missing values. The corresponding target output is given as \(\mathbf {Y} \in \{-1, 1\}^{N\times c}\), where c is the number of target outputs, i.e., 1 in our case for pMCI/sMCI classification. We use \((\mathbf {x}_i\in \mathbb R^{1\times d},\mathbf {y}_i\in \mathbb R^{1\times c})\) to denote the feature-target pair for the i-th sample.
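For concreteness, the projection operator \(\mathcal {P}_{\varOmega }\) used throughout can be sketched in Python with a boolean mask standing in for the index set \(\varOmega \); the matrix values and mask below are hypothetical toy data, not from ADNI:

```python
import numpy as np

def project(M, omega):
    """P_Omega(M): keep entries at observed positions, zero elsewhere."""
    return np.where(omega, M, 0.0)

# Toy data: 3 subjects, 4 features; subject 2 is missing its last two features.
X = np.array([[1.0, 2.0, 3.0, 4.0],
              [5.0, 6.0, 0.0, 0.0],
              [7.0, 8.0, 9.0, 1.0]])
omega = np.array([[True, True, True, True],
                  [True, True, False, False],
                  [True, True, True, True]])
P_X = project(X, omega)   # zeros at the unobserved (i, j) positions
```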
2.2 Preliminaries

1.
Low-rank matrix completion (LRMC) has been proposed to recover missing data from a limited number of samples. Assuming noise-free incomplete data \(\mathbf {X}\), its formulation is given as \( \{\min _{\mathbf {Z}} \Vert \mathbf {Z} \Vert _*, ~\text {s.t.}~\mathcal {P}_{\varOmega }(\mathbf {Z}) = \mathcal {P}_{\varOmega }(\mathbf {X})\}, \) where \(\mathbf {Z}\) is the completed version of \(\mathbf {X}\), and \(\mathcal {P}\) is the orthogonal projection such that the (i, j)-th element of \(\mathcal {P}_{\varOmega }(\mathbf {Z})\) is equal to \(\mathbf {Z}_{ij}\) if \((i,j)\in \varOmega \) and zero otherwise. In the presence of noise, we relax the equality constraint, and modify the optimization problem to [4]
$$\begin{aligned}&\min _{\mathbf {Z}} \Vert \mathbf {Z} \Vert _*+ \lambda \Vert \mathcal {P}_{\varOmega }(\mathbf {Z}) - \mathcal {P}_{\varOmega }(\mathbf {X})\Vert _F^2, \end{aligned}$$(1)where \(\Vert \cdot \Vert _F\) denotes the Frobenius norm, and \(\lambda \) is a positive trade-off parameter that can be determined by the noise level in the data.
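One simple way to solve the relaxed problem in Eq. (1) is a soft-impute style iteration that alternately fills the missing entries with the current estimate and applies singular value thresholding. The sketch below is an illustrative variant of this idea, not the exact solver used in this paper, and the `tau` value is an arbitrary choice:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink each singular value of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete_lowrank(X, omega, tau=0.1, n_iter=300):
    """Soft-impute style iteration for the relaxed LRMC problem:
    repeatedly impute missing entries with the current low-rank
    estimate, then apply singular value thresholding."""
    Z = np.where(omega, X, 0.0)            # start with zeros at missing entries
    for _ in range(n_iter):
        filled = np.where(omega, X, Z)     # keep observed values, impute the rest
        Z = svt(filled, tau)
    return Z
```

On a low-rank matrix with a few hidden entries, the iteration recovers the missing values approximately (the shrinkage slightly biases all entries toward zero).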

2.
Robust principal component analysis (RPCA) assumes that the noisy data can be decomposed into two components: a low-rank component \(\mathbf {Z}\), which represents the clean data, and an error component \(\mathbf {E}\), which represents the data noise. Its formulation is given as \(\{ \min _{\mathbf {Z},\mathbf {E}} \Vert \mathbf {Z} \Vert _*+ \lambda \Vert \mathbf {E}\Vert _1, ~\text {s.t.}~\mathbf {X} = \mathbf {Z}+\mathbf {E}\}, \) where \(\Vert \cdot \Vert _1\) denotes the \(\ell _1\)-norm, assuming sparse noise in \(\mathbf {X}\). In the presence of missing data, this formulation can be rewritten as [7]
$$\begin{aligned} \min _{\mathbf {Z},\mathbf {E}} \Vert \mathbf {Z} \Vert _*+ \lambda \Vert \mathcal {P}_{\varOmega }(\mathbf {E})\Vert _1, ~\text {s.t.}~\mathcal {P}_{\varOmega }(\mathbf {X}) = \mathcal {P}_{\varOmega }(\mathbf {Z}+\mathbf {E}). \end{aligned}$$(2)Note that if we use the Frobenius norm for the error matrix (i.e., assuming Gaussian noise in \(\mathbf {X}\)), and substitute \(\mathbf {E} = \mathbf {X}-\mathbf {Z}\) in Eq. (2), it becomes equivalent to Eq. (1). Thus, RPCA with missing data can be seen as a matrix completion problem with more robustness to data noise, as it explicitly models the noise as an error term. Without loss of generality, we can assume \(\mathcal {P}_{\bar{\varOmega }}(\mathbf {X})=\mathbf {0}\), and let \(\mathcal {P}_{\bar{\varOmega }}(\mathbf {E})\) take any value that satisfies \(\mathcal {P}_{\bar{\varOmega }}(\mathbf {X})= \mathcal {P}_{\bar{\varOmega }}(\mathbf {Z}+\mathbf {E})\). This simplifies Eq. (2) to the easier problem
$$\begin{aligned} \min _{\mathbf {Z},\mathbf {E}} \Vert \mathbf {Z} \Vert _*+ \lambda \Vert \mathcal {P}_{\varOmega }(\mathbf {E})\Vert _1, ~\text {s.t.}~\mathbf {X} = \mathbf {Z}+\mathbf {E}. \end{aligned}$$(3)We call this formulation the incomplete-data version of RPCA (IRPCA).

3.
A classifier can be trained to map a data sample \(\mathbf {x}_i\) to the target output \(\mathbf {y}_i\) by learning a coefficient matrix \(\mathbf {W}\in \mathbb R^{d\times c}\). The general formulation for a classifier (e.g., a linear regression model) is given as
$$\begin{aligned} \min _{\mathbf {W}} L(\mathbf {X},\mathbf {Y},\mathbf {W}) + \text {Reg}(\mathbf {W}), \end{aligned}$$(4)where \(L(\cdot )\) is the classifier loss function, and \(\text {Reg}(\cdot )\) is the regularizer for \(\mathbf {W}\), e.g., the \(\ell _{2,1}\)-norm for joint sparse feature learning. We use the least-squares loss function in this study, i.e., \(L(\cdot ) = \Vert \mathbf {Y}-\mathbf {X} \mathbf {W}\Vert _F^2\). This classification formulation requires all values in \(\mathbf {X}\) to be known, and becomes unusable if \(\mathbf {X}\) is incomplete. Thus, we propose to combine Eqs. (3) and (4) to impute the missing values in the feature matrix, denoise the data, and learn the classifier jointly.
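With the least-squares loss and a Frobenius-norm (\(\ell _2\)) regularizer, Eq. (4) admits the familiar ridge-regression closed form \(\mathbf {W} = (\mathbf {X}^T\mathbf {X} + \gamma \mathbf {I})^{-1}\mathbf {X}^T\mathbf {Y}\). A minimal sketch, where the regularization weight `gamma` and the toy data are our own illustrative choices:

```python
import numpy as np

def ridge_classifier(X, Y, gamma=0.1):
    """Closed-form solution of min_W ||Y - X W||_F^2 + gamma * ||W||_F^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + gamma * np.eye(d), X.T @ Y)

# Toy usage: 6 samples, 3 features, labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))
Y = np.sign(X @ np.array([[1.0], [-2.0], [0.5]]))
W = ridge_classifier(X, Y)
pred = np.sign(X @ W)   # predicted labels for the training samples
```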
2.3 Proposed Method
Given an incomplete input matrix \(\mathbf {X}\) and its corresponding target matrix \(\mathbf {Y}\), we propose to concurrently impute the missing values in \(\mathbf {X}\) and learn a classifier coefficient matrix \(\mathbf {W}\) based on the completed data. More specifically, we employ the IRPCA formulation (i.e., Eq. (3)) to decompose \(\mathbf {X}\) into low-rank and error components, and learn the classifier based on the low-rank denoised data. We call our method "Joint Robust Imputation and Classification (JRIC)". Figure 1 shows an overview of our method. Note that \(\mathbf {X} = [\mathbf {X}_{tr};\mathbf {X}_{te}]\) is the concatenation of the training input data \(\mathbf {X}_{tr}\) and the testing input data \(\mathbf {X}_{te}\). \(\mathbf {X}\) could be incomplete, as shown by the white boxes in \(\mathbf {X}\); after applying IRPCA, it is transformed into \(\mathbf {Z} = [\mathbf {Z}_{tr};\mathbf {Z}_{te}]\), where the data are denoised and the missing feature values are imputed. We then train a classifier using (\(\mathbf {Z}_{tr}\), \(\mathbf {Y}_{tr}\)) by learning a classifier weight \(\mathbf {W}\), which could be sparse. In addition, we feed \(\mathbf {W}\) back to IRPCA so that it focuses on reducing the reconstruction error of discriminative features, while relaxing the reconstruction error of redundant or noisy features. Note that the above learning tasks are formulated in a unified framework, so that each component of the formulation can correct the others iteratively until the algorithm converges. Our proposed JRIC formulation is given as:
$$\begin{aligned} \min _{\mathbf {Z},\mathbf {E},\mathbf {W}} \lambda _1 \Vert \mathbf {Z} \Vert _* + \lambda _2 \Vert \mathcal {P}_{\varOmega _\mathbf {W}}(\mathbf {E})\Vert _p + L(\mathbf {Z}_{tr},\mathbf {Y}_{tr},\mathbf {W}) + \mu \,\text {Reg}(\mathbf {W}), ~\text {s.t.}~\mathbf {X} = \mathbf {Z}+\mathbf {E}, \end{aligned}$$(5)where \(\mu , \lambda _1\), \(\lambda _2\) are the regularization parameters. The first two terms in (5), together with the constraint, compose the IRPCA component (i.e., Eq. (3)), while the last two terms compose the classifier component (i.e., Eq. (4)). More specifically, the first term is a nuclear norm, which encourages \(\mathbf {Z}\) to be low-rank, assuming that the "clean" data is low-rank. The second term is the reconstruction error term, which ensures that the low-rank matrix \(\mathbf {Z}\) does not differ too much from the original matrix \(\mathbf {X}\). The third term is the classifier loss function, which could be a linear regression loss, logistic loss, hinge loss, etc., used to learn a classifier weight \(\mathbf {W}\) (which is a vector if there is only one column in \(\mathbf {Y}\), and a matrix otherwise). The fourth term is the regularizer of \(\mathbf {W}\), which prevents the classifier from overfitting, and selects discriminative features if a sparsity constraint (e.g., the \(\ell _1\)- or \(\ell _{2,1}\)-norm) is used. Note that we have used \(\varOmega _\mathbf {W}\) instead of \(\varOmega \) in the second term of Eq. (5) to include the information from \(\mathbf {W}\) when computing the reconstruction loss. We define \(\varOmega _\mathbf {W}\) as the index set of discriminative non-missing feature values in \(\mathbf {X}\), where the discriminative features are determined by detecting non-zero rows in \(\mathbf {W}\).
We use the Alternating Direction Method of Multipliers (ADMM) [2] to solve (5). In ADMM, a complex optimization problem is simplified by introducing auxiliary variables, so that it can be decomposed into several smaller convex optimization problems that can be solved efficiently. Specifically, we introduce an auxiliary variable \(\mathbf {J}\):
$$\begin{aligned} \min _{\mathbf {Z},\mathbf {E},\mathbf {W},\mathbf {J}} \lambda _1 \Vert \mathbf {J} \Vert _* + \lambda _2 \Vert \mathcal {P}_{\varOmega _\mathbf {W}}(\mathbf {E})\Vert _p + L(\mathbf {Z}_{tr},\mathbf {Y}_{tr},\mathbf {W}) + \mu \,\text {Reg}(\mathbf {W}), ~\text {s.t.}~\mathbf {X} = \mathbf {Z}+\mathbf {E}, ~\mathbf {Z} = \mathbf {J}. \end{aligned}$$(6)
The augmented Lagrangian function for (6) is given as:
$$\begin{aligned} \mathcal {L} =&~ \lambda _1 \Vert \mathbf {J} \Vert _* + \lambda _2 \Vert \mathcal {P}_{\varOmega _\mathbf {W}}(\mathbf {E})\Vert _p + L(\mathbf {Z}_{tr},\mathbf {Y}_{tr},\mathbf {W}) + \mu \,\text {Reg}(\mathbf {W}) \nonumber \\&+ \frac{\rho }{2} \left( \Vert \mathbf {X}-\mathbf {Z}-\mathbf {E}+\mathbf {U}_1\Vert _F^2 + \Vert \mathbf {Z}-\mathbf {J}+\mathbf {U}_2\Vert _F^2 \right) , \end{aligned}$$(7)
where \(\mathbf {U}_1\) and \(\mathbf {U}_2\) are the Lagrangian multipliers, and \(\rho \) is a tradeoff parameter, controlling the rate of convergence. We then solve Eq. (7) by solving the following optimization subproblems iteratively, until one of the convergence criteria is met.

1.
Update \(\mathbf {J}\):
$$\begin{aligned} \mathbf {J}^{k}= \arg \min _{\mathbf {J}} \lambda _1 \Vert \mathbf {J} \Vert _* + \frac{\rho }{2} \left( \Vert (\mathbf {Z} + \mathbf {U}_2) - \mathbf {J}\Vert _F^2\right) . \end{aligned}$$(8)The solution for this problem is given by the singular value thresholding operator [3], \(\mathcal {S}_{\frac{\lambda _1}{\rho }}(\mathbf {Z}+\mathbf {U}_2) = \mathbf {G}\mathcal {R}_{\frac{\lambda _1}{\rho }}(\varvec{\varSigma })\mathbf {H}^T\), where \(\mathbf {G}\varvec{\varSigma }\mathbf {H}^T\) is the singular value decomposition (SVD) of \((\mathbf {Z} + \mathbf {U}_2)\), and \(\mathcal {R}_{\tau }(\cdot )\) is a shrinkage operator defined as \(\mathcal {R}_{\tau }(x) = \text {sign}(x)\max (|x|-\tau ,0)\).
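The singular value thresholding step of Eq. (8) can be sketched as follows (the function names are our own, not from the paper):

```python
import numpy as np

def shrink(x, tau):
    """Shrinkage operator R_tau(x) = sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt_update(Z, U2, lam1, rho):
    """Solve Eq. (8): J = argmin_J lam1*||J||_* + (rho/2)*||(Z + U2) - J||_F^2
    via singular value thresholding of (Z + U2)."""
    G, sigma, Ht = np.linalg.svd(Z + U2, full_matrices=False)
    return G @ np.diag(shrink(sigma, lam1 / rho)) @ Ht
```

The resulting \(\mathbf {J}\) has exactly the singular values of \(\mathbf {Z}+\mathbf {U}_2\) shrunk by \(\lambda _1/\rho \), which is what makes the update a proximal step for the nuclear norm.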

2.
Update \(\mathbf {Z}\):
$$\begin{aligned} \mathbf {Z}^{k} = \arg \min _{\mathbf {Z}} L(\mathbf {Z}_{tr},\mathbf {Y}_{tr},\mathbf {W}) + \frac{\rho }{2} \Vert \mathbf {Z}-(\mathbf {X}-\mathbf {E} + \mathbf {U}_1)\Vert _F^2 + \frac{\rho }{2} \Vert \mathbf {Z}-(\mathbf {J}-\mathbf {U}_2)\Vert _F^2, \end{aligned}$$(9)where we define \(L(\mathbf {Z}_{tr},\mathbf {Y}_{tr},\mathbf {W})=\Vert \mathbf {Y}_{tr}-\mathbf {Z}_{tr}\mathbf {W}\Vert _F^2\) as a linear regression loss function. We solve for \(\mathbf {Z}_{tr}\) and \(\mathbf {Z}_{te}\) separately in this subproblem. When solving for \(\mathbf {Z}_{te}\), the first term in Eq. (9) is irrelevant, and thus the solution depends only on the other two terms in Eq. (9). Let \(\mathbf {S} = \mathbf {X}-\mathbf {E} + \mathbf {U}_1+\mathbf {J}-\mathbf {U}_2\); then it is easy to show that the solution for \(\mathbf {Z}_{te}\) is given as \(\frac{1}{2}\mathbf {S}_{te}\). For \(\mathbf {Z}_{tr}\), the closed-form solution is given as \(\left( \mathbf {Y}_{tr}\mathbf {W}^T + \frac{\rho }{2} \mathbf {S}_{tr}\right) (\mathbf {W}\mathbf {W}^T+\rho \mathbf {I})^{-1}\), where \(\mathbf {I}\) is the identity matrix. Then, \(\mathbf {Z}^k = [\mathbf {Z}_{tr} ; \mathbf {Z}_{te}]\). Note that other classifier loss functions can be used; as long as the loss function is differentiable, this subproblem can be solved using the subgradient descent method.
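The two closed-form pieces of this update can be sketched as follows (variable names are hypothetical; `n_tr` denotes the number of training rows, so the first `n_tr` rows of the matrices are the training samples):

```python
import numpy as np

def update_Z(X, E, J, U1, U2, W, Y_tr, rho, n_tr):
    """Closed-form Z update for the least-squares loss (Eq. (9))."""
    S = X - E + U1 + J - U2                 # combined proximity target
    A = W @ W.T + rho * np.eye(W.shape[0])
    # training rows: (Y_tr W^T + (rho/2) S_tr)(W W^T + rho I)^{-1}
    Z_tr = (Y_tr @ W.T + 0.5 * rho * S[:n_tr]) @ np.linalg.inv(A)
    # testing rows: the loss term vanishes, so the minimizer is S_te / 2
    Z_te = 0.5 * S[n_tr:]
    return np.vstack([Z_tr, Z_te])
```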

3.
Update \(\mathbf {W}\):
$$\begin{aligned} \mathbf {W}^{k} = \arg \min _{\mathbf {W}} L(\mathbf {Z}_{tr},\mathbf {Y}_{tr},\mathbf {W}) + \text {Reg}(\mathbf {W}), \end{aligned}$$(10)which can be solved using existing solvers for the chosen classifier.

4.
Update \(\varOmega _\mathbf {W}\): Remove the indices in \(\varOmega \) corresponding to zero rows in \(\mathbf {W}\).

5.
Update \(\mathbf {E}\):
$$\begin{aligned} \mathbf {E}^{k} = \arg \min _{\mathbf {E}} \lambda _2\Vert \mathcal {P}_{\varOmega _{\mathbf {W}}}(\mathbf {E})\Vert _p + \frac{\rho }{2} \Vert (\mathbf {X}-\mathbf {Z} + \mathbf {U}_1) - \mathbf {E}\Vert _F^2. \end{aligned}$$(11)When \(p=1\), we can use the shrinkage operator to solve for \(\mathbf {E}\): the solution is \(\mathcal {R}_{\frac{\lambda _2}{\rho }}(\mathbf {X}-\mathbf {Z} + \mathbf {U}_1) \) for the \(\varOmega _\mathbf {W}\) entries of \(\mathbf {E}\), and \(\mathbf {X}-\mathbf {Z} + \mathbf {U}_1\) for the \(\bar{\varOmega }_\mathbf {W}\) entries.
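This update is element-wise and can be sketched as follows for \(p=1\), with \(\varOmega _\mathbf {W}\) again represented by a hypothetical boolean mask:

```python
import numpy as np

def update_E(X, Z, U1, omega_w, lam2, rho):
    """Solve Eq. (11) for p = 1: soft-threshold the residual on the
    Omega_W entries; the remaining entries simply absorb the residual."""
    R = X - Z + U1
    shrunk = np.sign(R) * np.maximum(np.abs(R) - lam2 / rho, 0.0)
    return np.where(omega_w, shrunk, R)
```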

6.
Update \(\mathbf {U}_1,\mathbf {U}_2\): \(\mathbf {U}_1^{k}= \mathbf {U}_1 + \mathbf {X} - \mathbf {Z} - \mathbf {E}, ~~\mathbf {U}_2^{k} = \mathbf {U}_2 + \mathbf {Z} - \mathbf {J} \).

7.
Stopping criterion: Steps 1 to 6 above are iterated until a convergence condition is met, e.g., when the change in \(\mathbf {Z}\) is negligible.
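One possible instance of this stopping criterion is a relative-change test on \(\mathbf {Z}\); the tolerance below is an arbitrary illustrative choice:

```python
import numpy as np

def converged(Z_new, Z_old, tol=1e-6):
    """True when the relative change in Z between iterations is negligible."""
    denom = max(np.linalg.norm(Z_old), 1.0)
    return np.linalg.norm(Z_new - Z_old) <= tol * denom
```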
We summarize our algorithm in Algorithm 1. After training, we obtain the denoised and imputed testing data, as well as the classifier weight \(\mathbf {W}\). The prediction for the testing data is thus given by \(\text {sign}(\mathbf {Z}_{te}\mathbf {W})\).
3 Experiment
3.1 Data
In this study, we use multimodality data (i.e., MRI, PET and clinical scores) from the ADNI baseline dataset (http://adni.loni.ucla.edu). Only subjects categorized as MCI at baseline are used in this study; we define progressive MCI (pMCI) subjects as MCI subjects who progress to AD within 2 years, and stable MCI (sMCI) subjects otherwise. Based on this definition, we have 124 pMCI and 118 sMCI subjects. Each MRI image is processed using the following steps: AC-PC alignment, N3 intensity inhomogeneity correction, skull stripping, tissue segmentation, and registration to a template with 93 ROIs [12]; the normalized gray matter (GM) volumes of the 93 ROIs are used as MRI features. We also affinely aligned each PET image to its corresponding MRI image, and used the mean ROI intensity values as PET features. In addition, clinical scores (e.g., ADAS, CDR, MMSE) are also used in this study.
3.2 Experimental Results and Discussions
We compare our method with two-step imputation-based classification methods, i.e., we first use IRPCA to impute and denoise the data, and then use sparse least-squares regression (IRPCA-sparse) or a linear SVM (IRPCA-SVM) to classify the data. We also compare our method with two state-of-the-art methods designed for incomplete multimodality datasets, i.e., the low-rank matrix completion method (LRMC) [10], and the incomplete multi-source feature learning (iMSF) method (using the least-squares loss function) [13]. In addition, we conduct our experiments using different modality combinations of MRI, PET and clinical scores (Cli), to show the performance of each method for each modality combination. For more robust comparison, we run 10 repetitions of 10-fold cross-validation and report the average accuracies as the performance measures. The hyperparameters of all the methods are determined via nested cross-validation using the training data. The classification results are shown in Fig. 2.
From Fig. 2, it can be seen that the proposed JRIC outperforms the comparison methods for most modality combinations, i.e., MRI+PET, MRI+Cli and MRI+PET+Cli. The classification performance of our method using a single modality (i.e., MRI) is comparable with IRPCA-SVM and LRMC. This is probably because we use a simple least-squares loss function to train our classifier, while IRPCA-SVM and LRMC use more advanced loss functions, i.e., the hinge loss and logistic loss, respectively. This indicates that if the data is complete, as in the single-modality case, more advanced classifiers should be used to achieve better classification performance. Nevertheless, comparing the results of IRPCA-sparse (which trains a classifier on the denoised data) and iMSF (which trains a classifier on the original data) reveals that using denoised data tends to yield better classification accuracy.
The advantage of our proposed method becomes more significant when using multimodality data, especially when some data is missing, e.g., when using MRI+PET data. This is probably due to the intertwined learning of IRPCA and the sparse classifier in our proposed method, which enables both components to correct each other for better classification performance. In addition, the feedback from the classifier enables IRPCA to apply stronger low-rank smoothing to redundant and non-discriminative features, while keeping a lower reconstruction error for discriminative features. This prevents IRPCA from over-smoothing (over-denoising) the data via the low-rank constraint, which could cause the loss of important information from discriminative features.
4 Conclusion
In this paper, we introduce a robust classifier for the dementia diagnosis problem using incomplete multimodality data. Our proposed method, JRIC, jointly imputes the missing values, denoises the data, and trains a classifier. We formulate our framework in a general way, so that it can be easily adapted to different types of classifiers. For fast implementation, we showcase our framework using a sparse classifier with the least-squares loss function. Our experimental results show that our proposed method outperforms the comparison methods, implying the benefit of the iterative learning of matrix completion and classification.
References
Adeli-Mosabbeb, E., Thung, K.H., An, L., Shi, F., Shen, D.: Robust feature-sample linear discriminant analysis for brain disorders diagnosis. In: NIPS (2015)
Boyd, S.P., Vandenberghe, L.: Convex Optimization. Cambridge University Press (2004)
Cai, J.F., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010)
Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717–772 (2009)
Goldberg, A., Zhu, X., et al.: Transduction with matrix completion: three birds with one stone. Adv. Neural Inf. Process. Syst. 23, 757–765 (2010)
Schneider, T.: Analysis of incomplete climate data: estimation of mean values and covariance matrices and imputation of missing values. J. Clim. 14(5), 853–871 (2001)
Shang, F., Liu, Y., et al.: Robust principal component analysis with missing data. In: Conference on Information and Knowledge Management, pp. 1149–1158. ACM (2014)
Thung, K.H., Wee, C.Y., Yap, P.T., Shen, D.: Identification of progressive mild cognitive impairment patients using incomplete longitudinal MRI scans. Brain Struct. Funct. 221(8), 3979–3995 (2016)
Thung, K.H., Yap, P.T., et al.: Conversion and time-to-conversion predictions of mild cognitive impairment using low-rank affinity pursuit denoising and matrix completion. Med. Image Anal. 45, 68–82 (2018)
Thung, K.H., et al.: Neurodegenerative disease diagnosis using incomplete multi-modality data via matrix shrinkage and completion. NeuroImage 91, 386–400 (2014)
Troyanskaya, O., Cantor, M., et al.: Missing value estimation methods for DNA microarrays. Bioinformatics 17(6), 520–525 (2001)
Wang, Y., Nie, J., Yap, P.T., Shi, F., Guo, L., Shen, D.: Robust deformable-surface-based skull-stripping for large-scale studies. In: Fichtinger, G., Martel, A., Peters, T. (eds.) MICCAI 2011. LNCS, vol. 6893, pp. 635–642. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-23626-6_78
Yuan, L., Wang, Y., et al.: Multi-source feature learning for joint analysis of incomplete multiple heterogeneous neuroimaging data. NeuroImage 61(3), 622–632 (2012)
Zhang, D., Shen, D.: Multi-modal multi-task learning for joint prediction of multiple regression and classification variables in Alzheimer's disease. NeuroImage 59(2), 895–907 (2012)
© 2018 Springer Nature Switzerland AG
Thung, K.-H., Yap, P.-T., Shen, D. (2018). Joint Robust Imputation and Classification for Early Dementia Detection Using Incomplete Multimodality Data. In: Rekik, I., Unal, G., Adeli, E., Park, S. (eds) PRedictive Intelligence in MEdicine. PRIME 2018. Lecture Notes in Computer Science, vol 11121. Springer, Cham. https://doi.org/10.1007/978-3-030-00320-3_7