Abstract
Ordinal classifier cascades are constrained by a hypothesised order of the semantic class labels of a dataset. This order determines the overall structure of the decision regions in feature space. Assuming the correct order of the class labels allows a high generalisation performance, while an incorrect one leads to diminished results. In this way, ordinal classifier systems can facilitate explorative data analysis by screening for potential candidate orders of the class labels. Previously, we have shown that screening is possible for total orders of all class labels. However, as datasets might comprise samples of ordinal as well as non-ordinal classes, the assumption of a total ordering might not be appropriate. An analysis of subsets of classes is required to detect such hidden ordinal substructures. In this work, we devise a novel screening procedure for exhaustive evaluations of all order permutations of all subsets of classes by bounding the number of enumerations we have to examine. Experiments with multiclass data from diverse applications revealed ordinal substructures that generate new relations and support known ones.
Introduction
Extensive data collections are considered valuable resources for hypothesis generation as well as theory building and confirmation. Providing samples of major concepts or categories, they can be seen as the foundation of data-driven learning and reasoning. Classical machine learning techniques focus on the discrimination of individual concepts. Depending on the chosen model type, they allow an interpretation of the underlying discrimination rule and the generation of hypotheses on its intrinsic characteristics [10, 27, 45]. For example, features with a high impact on a decision boundary can be reported when screening for potential causes of class differences [21, 34, 36]. Dense regions can be extracted when the definition of prototypic cases is of interest [16]. Combined with external domain knowledge, classification models can also give hints to higher-level processes involved [35, 47].
Hypotheses on the relations of the categories are only rarely provided [9, 22, 52]. Nevertheless, they have tremendous explanatory potential as they link different concepts collected for a specific subject. This information is complementary to the intrinsic characteristics mentioned above. Together they allow a holistic overview of the matter. One type of classifier that utilises inter-class relations is the ordinal classifier [11]. An ordinal classifier relies on an ordinal relationship of the kind \(a \prec b \prec c\), which is assumed for the semantic concepts (class labels) of a dataset but not guaranteed for its feature representation. The assumed semantic ordering guides the training process of an ordinal classifier. If it is reflected in the feature space, the ordinal classifier can achieve high sensitivities. Otherwise, it will show a diminished performance [33].
A dependency on a predefined class order can be established in different ways: the overall structure of a classification model can be modified, or its training algorithm can be. The former is most prominently used for ordinal adaptations of hierarchical multiclass architectures based on sequences of binary classifiers [40]. The order of these sequences can be designed to reflect assumed class orders. Examples include ordinal classifier cascades [18], which are modifications of general decision lists [45], ordinal versions [23] of nested dichotomies [19], or systems based on directed acyclic graphs [44]. More specific modifications can be applied to different types of base classifiers [11, 12]. A training algorithm can be adapted for ordinal classification by utilising specific performance measures [50] or cost-sensitive base classifiers [31, 39].
Here, we do not assume any order on the class labels but instead screen for potential candidate orders on the semantic level. Nevertheless, the assumption of a total order of classes might not be appropriate in the presence of non-ordinal categories. We propose a novel screening procedure for generating hypotheses on ordinal relations among subsets of these categories. It extensively replaces de novo modelling by memoisation techniques, bringing exhaustive evaluation into reach. We utilise ordinal classifier cascades to screen through all class combinations and highlight those subcascades that achieve a predefined minimal classwise sensitivity, a criterion that is more difficult to meet for longer subcascades. We therefore focus on those that cover as many classes as possible. We found that accurate subcascades can also result in a compact assignment of the remaining categories and conclude that they hence allow for hypothesis generation on these remaining non-ordinal classes.
The remaining article is organised as follows. Section 2 provides the underlying notation and concepts of ordinal classification. In Sect. 3, we describe our screening method. We outline the design of our experiments and results in Sects. 4 and 5 and discuss them in Sect. 6.
Methods
The basic building block of our screening experiments is the training and evaluation of classification models, the classifiers. In its basic version, a classifier is a function \(c: \mathbb {R}^{n} \rightarrow \mathcal {Y}\) designed for categorising an object into one out of \(|\mathcal {Y}|\) predefined and fixed classes \(y\in \mathcal {Y}\).
The corresponding classification task is typically named binary (\(|\mathcal {Y}|=2\)) or multiclass (\(|\mathcal {Y}|>2\)). The object is represented as a vector \(\mathbf {x} = (x_{1},\ldots ,x_{n})^T\) of measurements, which we assume to be embedded in the n-dimensional real-valued space \(\mathbb {R}^{n}\). A classifier is adapted to a classification task via a set of labelled training examples \(\mathcal {T}=\{(\mathbf {x}_{i},y_{i})\}_{i=1}^{|\mathcal {T}|}\), \(y_{i} \in \mathcal {Y}\). The training algorithm and the concept class of the classifier are chosen a priori.
The performance of a classifier is typically evaluated on an independent set of validation samples \(\mathcal {V}=\{(\mathbf {x}'_{i}, y'_{i})\}_{i=1}^{|\mathcal {V}|}\). Although a classifier is trained to predict a predefined set of classes \(\mathcal {Y}\), it may be applied in altered scenarios in which a subset \(y'_{i}\in \mathcal {Y}' \subset \mathcal {Y}\) or a superset \(y'_{i}\in \mathcal {Y}' \supset \mathcal {Y}\) of classes might be used. We will therefore define our performance measures in dependence on the label set of interest.
All performance measures will be based on conditional prediction rates of the type
\[ \mathrm {R}_{c}(y' \mid y) = \frac{1}{|\mathcal {X}_{y}|} \sum _{\mathbf {x} \in \mathcal {X}_{y}} \mathbb {I}_{\left[ c(\mathbf {x}) = y'\right] }. \]
The symbol \(\mathbb {I}_{\left[ \cdot \right] }\) denotes the indicator function and \(\mathcal {X}_{y}\) denotes a set of samples of class y. Other (re)sampling schemes might be applied but will not change the theoretical characteristics discussed in this work. Three types of conditional prediction rates can be distinguished: classwise sensitivities (\(y'=y\)), confusions (\(y'\ne y\), \(y'\in \mathcal {Y}\)), and external rates (for samples of classes \(y \notin \mathcal {Y}\)).
While (classwise) sensitivities and confusions occur for the learned set of classes \(\mathcal {Y}\), the external rates describe the classifier's behaviour on samples of foreign classes \(y \in \mathcal {Y}'\setminus \mathcal {Y}\).
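As a minimal sketch, a conditional prediction rate of this kind can be estimated as follows; the `classifier` callable and the sample list are hypothetical stand-ins for a trained model and the set \(\mathcal {X}_{y}\):

```python
def conditional_rate(classifier, samples_of_class, predicted_label):
    """Empirical rate at which `classifier` assigns `predicted_label`
    to samples that all stem from one true class (the set X_y)."""
    predictions = [classifier(x) for x in samples_of_class]
    return sum(p == predicted_label for p in predictions) / len(predictions)
```

Classwise sensitivities, confusions, and external rates then differ only in which true class the samples stem from and which predicted label is counted.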
A standard performance measure for classification tasks is the empirical accuracy. In dependence on a label set \(\mathcal {Y}\), it can be formulated as
\[ \mathrm {Acc}(\mathcal {Y}) = \frac{1}{|\mathcal {V}|} \sum _{i=1}^{|\mathcal {V}|} \mathbb {I}_{\left[ c(\mathbf {x}'_{i}) = y'_{i}\right] }. \]
In its standard version, the empirical accuracy is applied to the full set of classes, but it might also be applied to a sub- or superset of classes. In the following, we focus on the minimal classwise sensitivity as a quality measure for a multiclass classifier system
\[ \mathrm {Smin}_{c}(\mathcal {Y}) = \min _{y \in \mathcal {Y}} \frac{1}{|\mathcal {X}_{y}|} \sum _{\mathbf {x} \in \mathcal {X}_{y}} \mathbb {I}_{\left[ c(\mathbf {x}) = y\right] }. \]
It can be seen as a lower bound on the overall accuracy, \(\mathrm {Smin}_{c}(\mathcal {Y}) \le \mathrm {Acc}(\mathcal {Y})\).
In the context of analysing subsets of classes, \(\mathrm {Smin}_{c}\) shows a monotonicity in the number of classes. For a non-empty subset of classes \(\mathcal {Y}' \subseteq \mathcal {Y}\) it holds that \(\mathrm {Smin}_{c}(\mathcal {Y}') \ge \mathrm {Smin}_{c}(\mathcal {Y})\).
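The measure and its monotonicity can be illustrated with a small sketch; the dictionary layout and the `classifier` callable are illustrative assumptions:

```python
def min_classwise_sensitivity(classifier, samples_by_class):
    """Smin: the minimum over all classes of the classwise sensitivity,
    i.e. the fraction of samples of each class that receive their own label."""
    return min(
        sum(classifier(x) == y for x in xs) / len(xs)
        for y, xs in samples_by_class.items()
    )
```

Restricting `samples_by_class` to a non-empty subset of its keys takes the minimum over fewer classes, so the value can only stay equal or increase.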
For external classes \(y'\not \in \mathcal {Y}\), we define the percentage of the most frequently predicted class label \(y \in \mathcal {Y}\) as the purity of \(y'\)
\[ \mathrm {Pur}(y') = \max _{y \in \mathcal {Y}} \frac{1}{|\mathcal {X}_{y'}|} \sum _{\mathbf {x} \in \mathcal {X}_{y'}} \mathbb {I}_{\left[ c(\mathbf {x}) = y\right] }. \]
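A sketch of this purity measure (the `classifier` callable is a hypothetical stand-in):

```python
from collections import Counter

def purity(classifier, samples_of_foreign_class):
    """Fraction of the most frequently predicted label among the
    predictions for samples of a class the classifier was not trained on."""
    counts = Counter(classifier(x) for x in samples_of_foreign_class)
    return max(counts.values()) / len(samples_of_foreign_class)
```

A purity of 1.0 means all samples of the foreign class are compactly mapped onto a single learned class.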
Ordinal Classification
Ordinal classification is a multiclass classification task (\(|\mathcal {Y}|>2\)) in which we hypothesise that the classes are related via an ordinal relationship
\[ y_{(1)} \prec y_{(2)} \prec \ldots \prec y_{(|\mathcal {Y}|)}, \]
where the subscript \(y_{(i)}\) indicates the position of the class within the ordering (ith class). The symbol \(\prec \) indicates that this ordering is only known (or assumed) for the verbal concepts. Its reflection in feature space cannot be guaranteed. Nevertheless, ordinal classifiers are constrained to this ordering and try to identify an embedding that reflects this ordinality. If such an embedding can be found, the chosen feature representation might be seen as evidence for the hypothesised ordinal relationship; otherwise, the ordinal classifier will suffer from a decreased generalisation performance (Fig. 1). In this case, the classifier will typically not be able to predict all classes of \(\mathcal {Y}\). Its performance might be measured by the minimal classwise sensitivity \(\mathrm {Smin}_{c}(\mathcal {Y})\). The susceptibility of ordinal classifiers to incorrect class orders is therefore an important characteristic for the screening process, in which we try to select reflected/relevant class orders among all possible class orders.
Ordinal Classifier Cascades
Our screening procedure is based on ordinal classifier cascades, which can be seen as ensemble multiclass classifiers \(h: \mathbb {R}^{n} \rightarrow \mathcal {Y}\)
based on an ensemble \(\mathcal {E}= \{c_{(i)}\}_{i=1}^{|\mathcal {Y}|-1}\) of \(|\mathcal {Y}|-1\) binary base classifiers \(c_{(i)}: \mathbb {R}^{n} \rightarrow \{y_{(i)}, y_{(i+1)}\}\),
each designed for predicting a pair of two consecutive classes within the assumed class ordering. We utilise pairwise training, meaning that each base classifier \(c_{(i)}\) is trained only on the samples of its two consecutive classes, \(\mathcal {T}_{(i)}=\{(\mathbf {x},y) \in \mathcal {T} \mid y \in \{y_{(i)}, y_{(i+1)}\}\}\).
The cascade itself can be seen as an untrainable fusion architecture, which evaluates the base classifiers sequentially according to the assumed class order and predicts the class of the first base classifier that votes for its earlier class,
\[ h(\mathbf {x}) = y_{(i^{*})}, \quad i^{*} = \min \left( \{\, i \mid c_{(i)}(\mathbf {x}) = y_{(i)} \,\} \cup \{|\mathcal {Y}|\} \right) . \]
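Assuming each base classifier is available as a callable returning one of its two classes, the sequential evaluation can be sketched as follows (the `base_clf` dictionary layout is an illustrative assumption):

```python
def cascade_predict(x, order, base_clf):
    """Evaluate pairwise base classifiers along the assumed class order.

    `order` is the hypothesised sequence of class labels; `base_clf[(a, b)]`
    is the binary classifier for the consecutive classes a and b.
    """
    for a, b in zip(order, order[1:]):
        if base_clf[(a, b)](x) == a:
            return a  # first classifier voting for its earlier class stops the cascade
    return order[-1]  # all classifiers passed the sample on: predict the last class
```

Samples that no classifier claims for an earlier class fall through to the final class of the order.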
Pairwise Training of Base Classifiers
The pairwise training of the base classifiers improves the training and evaluation time of ordinal classifier cascades when multiple class orders are tested. As the pairwise training of a base classifier \(c_{(i)}\) does not take into account any foreign classes \(\mathcal {Y} \setminus \{y_{(i)}, y_{(i+1)}\}\), a classifier \(c_{i,j}: \mathbb {R}^{n} \rightarrow \{y_{i}, y_{j}\}\) designed for predicting classes \(y_{i}\) and \(y_{j}\)
will be identical for all class orders; \(c_{i,j}\) does not depend on the class order. The screening process through all class orderings can therefore be based on a common set of \(\binom{|\mathcal {Y}|}{2}\) base classifiers and their predictions, which can be memoised beforehand. We propose to construct a prediction table as shown in Table 1, which allows the evaluation of all cascades without retraining and re-evaluating the base classifiers.
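The memoisation step can be sketched as follows; `train_pair` stands in for the pairwise training routine and is an assumption of this sketch:

```python
from itertools import combinations

def build_prediction_table(classes, train_pair, samples):
    """Train each of the C(|Y|, 2) pairwise classifiers exactly once and
    memoise its predictions on all validation samples.  Any cascade over
    any subset and order of `classes` can then be evaluated by lookups."""
    table = {}
    for a, b in combinations(classes, 2):
        clf = train_pair(a, b)                     # trained only on samples of a and b
        table[(a, b)] = [clf(x) for x in samples]  # one memoised row per class pair
    return table
```

Evaluating a candidate cascade then never touches the base classifiers again; it only reads the rows of `table` in the order under test.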
Error Bounds
As a consequence of their invariant predictions, the conditional prediction rates of pairwise trained base classifiers are likewise invariant to the order of classes. They can again be precalculated and looked up when necessary (Table 2). Previously, it has been shown that the classwise sensitivities of an ordinal classifier cascade are upper bounded by a set of conditional prediction rates of the corresponding base classifiers [37].
Theorem 1
Let h denote an ordinal classifier cascade
with base classifiers \(\mathcal {E} = \left\{ c_{(1)},\ldots ,c_{(|\mathcal {Y}|-1)}\right\} \). Let furthermore \(\mathcal {X}_{(i)}\) be a non-empty set of samples of class \(y_{(i)}\). Then the sensitivity of h for \(y_{(i)}\) is limited by the conditional prediction rates of the base classifiers involved in this prediction: a sample of \(\mathcal {X}_{(i)}\) can only receive label \(y_{(i)}\) if every preceding classifier \(c_{(j)}\), \(j<i\), passes it on to \(y_{(j+1)}\) and, for \(i<|\mathcal {Y}|\), classifier \(c_{(i)}\) assigns label \(y_{(i)}\). The sensitivity is therefore bounded by the minimum of the corresponding conditional prediction rates.
Note that neither the construction nor the evaluation of an ordinal classifier cascade is required for providing these upper limits. A limit on the minimal classwise sensitivity of an ordinal classifier cascade can be given by a set of lookups. In this way, a large proportion of those cascades that will not pass a predefined threshold \(t\le \mathrm {Smin}_{c}(\mathcal {Y})\) can be rejected before starting the more expensive calculations on the prediction table (Table 1). We utilise the CASCADES algorithm [37], which realises this strategy as a preprocessing step in order to reduce the computational burden. Only those cascades that pass this filter will be evaluated for their actual minimal classwise sensitivity.
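The pruning idea can be illustrated with the following sketch of one such upper bound (not necessarily the exact bound of [37]): a sample of class \(y_{(i)}\) is labelled \(y_{(i)}\) only if every earlier classifier passes it on and classifier \(c_{(i)}\) claims it, so each of these marginal rates bounds the sensitivity. The `rate[(a, b)][y]` table layout is an assumption, holding the memoised rate at which the \((a, b)\)-classifier predicts a on samples of class y.

```python
def sensitivity_upper_bound(order, i, rate):
    """Upper bound on the sensitivity for the class at position i (0-based)
    of a candidate cascade, computed purely from memoised rate lookups.
    rate[(a, b)][y]: rate at which classifier c_{a,b} predicts a on class y."""
    y = order[i]
    bounds = []
    for j in range(i):  # every earlier classifier must pass samples of y forward
        a, b = order[j], order[j + 1]
        bounds.append(1.0 - rate[(a, b)][y])
    if i < len(order) - 1:  # classifier i must claim the sample for class y
        a, b = order[i], order[i + 1]
        bounds.append(rate[(a, b)][y])
    return min(bounds) if bounds else 1.0

def passes_filter(order, rate, t):
    """Reject a candidate cascade early if any bounded sensitivity is below t."""
    return all(sensitivity_upper_bound(order, i, rate) >= t
               for i in range(len(order)))
```

Cascades rejected by `passes_filter` never reach the prediction-table evaluation, which is where the savings arise.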
Screening for Ordinal (Sub)structures
Here, we focus on the analysis of multiclass datasets that are not originally designed for ordinal classification (Fig. 2). That is, we do not assume a unique ordinal relationship for all classes in \(\mathcal {Y}\). The classes in \(\mathcal {Y}\) can instead be seen as a loose collection of known categories within a common context of interest. Nevertheless, subsets of classes \(\mathcal {Y}' \subseteq \mathcal {Y}\) can fulfil our quality criteria for ordinal classification. The corresponding decision regions can be interpreted as an ordinal structure, and we might hypothesise an ordinal relation between the classes in \(\mathcal {Y}'\). Here, memoization tables (Tables 1, 2) proposed for the full set of classes \(\mathcal {Y}\) can directly be repurposed for the analysis of any subset of classes \(\mathcal {Y}'\).
Size of the Search Space
The number of candidate cascades depends on the chosen screening task. In all cases, it can be decomposed into the number of class orders \(|\mathcal {Y}|!\) and the number of subsets \(\binom{|\mathcal {Y}|}{k}\) of a predefined size k:

- Full cascades: \(o(\mathcal {Y})=|\mathcal {Y}|!\)
- Subcascades of size k: \(s_{k}(\mathcal {Y})=\binom{|\mathcal {Y}|}{k}\, k!\)
- All subcascades: \(s_{all}(\mathcal {Y})=\sum _{k=1}^{|\mathcal {Y}|} \binom{|\mathcal {Y}|}{k}\, k!\)
It might be noteworthy that \(s_{k}(\mathcal {Y})< s_{|\mathcal {Y}|-k}(\mathcal {Y})\) for \(k < |\mathcal {Y}|/2\).
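These counts can be checked numerically with a small sketch using only the standard library:

```python
from math import comb, factorial

def n_full_cascades(m):
    """o(Y) = |Y|! -- all orders of all |Y| classes."""
    return factorial(m)

def n_subcascades_of_size(m, k):
    """s_k(Y) = C(|Y|, k) * k! -- all orders of all k-subsets."""
    return comb(m, k) * factorial(k)

def n_all_subcascades(m):
    """s_all(Y) = sum of s_k(Y) over k = 1..|Y|."""
    return sum(n_subcascades_of_size(m, k) for k in range(1, m + 1))
```

For \(|\mathcal {Y}| = 10\), the ratio `n_all_subcascades(10) / n_full_cascades(10)` already agrees with Euler's number e to about six decimal places.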
The ratio of the number of all subcascades to the number of full cascades is approximately given by Euler's number:
\[ \frac{s_{all}(\mathcal {Y})}{o(\mathcal {Y})} = \sum _{k=1}^{|\mathcal {Y}|} \frac{1}{(|\mathcal {Y}|-k)!} = \sum _{j=0}^{|\mathcal {Y}|-1} \frac{1}{j!} \;\xrightarrow {|\mathcal {Y}| \rightarrow \infty }\; e. \]
Examples of the numbers of candidate cascades can be found in Table 3. Note that for other (traditional) types of ordinal classifiers, the evaluation of each candidate cascade requires the de novo training of the corresponding model. As these training phases by far exceed the time complexity of evaluating the memoisation tables (Tables 1, 2), the proposed screenings are infeasible for traditional algorithms and implementations.
Properties of Subcascades
The combinatorics show that the overall number of subcascades exceeds the overall number of full cascades by only a constant factor (approximately e). Nevertheless, we expect a much higher number of candidate subcascades than of candidate full cascades to pass a threshold on the minimal classwise sensitivity.
Theorem 2
Let \(y_{(1)} \prec \ldots \prec y_{(i)} \prec \ldots \prec y_{(|\mathcal {Y}|)}\) denote an arbitrary but fixed class order of the classes in \(\mathcal {Y}\) and \(h: \mathbb {R}^{n} \rightarrow \mathcal {Y}\) an ordinal classifier cascade designed for this class order and based on an ensemble of pairwise trained base classifiers \(\mathcal {E} =\{c_{(i)}\}_{i=1}^{|\mathcal {Y}|-1}\). Let furthermore \(\mathcal {Y}' = \mathcal {Y}\setminus \{y_{(i)}\}\) and \(h': \mathbb {R}^{n} \rightarrow \mathcal {Y}'\) denote an ordinal classifier cascade designed for class order \(y_{(1)} \prec \ldots \prec y_{(i-1)} \prec y_{(i+1)} \prec \ldots \prec y_{(|\mathcal {Y}|)}\) based on \(\mathcal {E}' =\mathcal {E} \setminus \{c_{(1)}\} \) if \(i=1\), \(\mathcal {E}' =\mathcal {E} \setminus \{c_{(|\mathcal {Y}|-1)}\} \) if \(i=|\mathcal {Y}|\), and
\(\mathcal {E}' = \left( \mathcal {E} \setminus \{c_{(i-1)}, c_{(i)}\}\right) \cup \{c'_{(i-1)}\}\)
otherwise, where \(c'_{(i-1)}: \mathbb {R}^{n} \rightarrow \{y_{(i-1)}, y_{(i+1)}\}\) is the pairwise trained base classifier for the classes \(y_{(i-1)}\) and \(y_{(i+1)}\). In this case
\[ \mathrm {Smin}_{h'}(\mathcal {Y}') \;\ge \; \mathrm {Smin}_{h}(\mathcal {Y}). \]
Proof
From the monotonicity of the minimal classwise sensitivity (Equation 12) we get \(\mathrm {Smin}_{h}(\mathcal {Y}') \ge \mathrm {Smin}_{h}(\mathcal {Y})\).
Cascade \(h'\) is constructed from the same base classifiers as h. Only one classifier is removed, which removes the option of predicting class label \(y_{(i)}\). In case \(i=1\), all samples that would have been classified as \(y_{(1)}\) by h are directly sent to \(c_{(2)}\) by \(h'\). This is of course also true for all remaining samples with \(c_{(1)}(\mathbf {x})=y_{(2)}\) in h. Therefore, for each class label \(y\in \mathcal {Y}'\), the set of samples that receive this class label y from \(h'\) comprises at least all samples that receive class label y from h. The corresponding classwise sensitivities and their minimum can only increase.
In case \(1<i<|\mathcal {Y}|\), classifiers \(c_{(i-1)}\) and \(c_{(i)}\) are additionally replaced by \(c'_{(i-1)}\), which forwards those samples that \(c_{(i-1)}\) would have sent to \(c_{(i)}\) directly to the subsequent classifiers. The common preamble of base classifiers of h and \(h'\) leads to identical classwise sensitivities for classes \(y_{(1)}, \ldots , y_{(i-1)}\). Those samples that received class label \(y_{(i)}\) from h are reclassified into one of the subsequent classes, leading to equal or increased classwise sensitivities.
In case \(i=|\mathcal {Y}|\), the cascades h and \(h'\) share the longest possible preamble of base classifiers. Only the samples receiving class label \(y_{(|\mathcal {Y}|)}\) from h will be reclassified as \(y_{(|\mathcal {Y}|-1)}\) by \(h'\), leading to a classwise sensitivity for class \(y_{(|\mathcal {Y}|-1)}\) that is at least as high as for h.
We therefore get for all cases
\[ \mathrm {Smin}_{h'}(\mathcal {Y}') \;\ge \; \mathrm {Smin}_{h}(\mathcal {Y}') \;\ge \; \mathrm {Smin}_{h}(\mathcal {Y}). \]
\(\square \)
Theorem 2 states that a subcascade \(h'\), constructed from a full cascade h by removing a single class \(y_{(i)}\) and by reconnecting the corresponding base classifiers, is guaranteed to achieve at least the same minimal classwise sensitivity on \(\mathcal {Y}'\) as h on \(\mathcal {Y}\). It can be applied recursively to \(h'\) and its own subcascades, leading to a transitive relationship between a full cascade and all its subcascades. The more classes a cascade comprises, the more difficult it will be to reach a predefined minimal classwise sensitivity. We will therefore focus on longer cascades.
Experiments
We evaluated our screening procedure empirically in \(10\times 10\) cross-validation experiments [24]. All ordinal classifier cascades were based on linear SVMs (cost = 1) [49]. The TunePareto software was used for the evaluation of the base classifiers [41]. All runtime experiments were performed on an AMD Opteron(tm) 6276 processor at 2.6 GHz (32 cores with HT) with 512 GB RAM.
As mentioned above, alternative systems are unlikely to cope with the computational burden of exhaustive screenings (Sect. 3). Reference classifiers were therefore directly applied to those orders identified by our screening procedure.
We compared our results to those of other multiclass architectures based on independently trained linear SVMs (cost = 1). As ordinal reference architectures, we used a split ordinal classifier cascade (Scc, pairwise training), denoted as "voter 3" by Jiang et al. [26], ensembles of nested dichotomies (END, \(|\mathcal {E}|=20\)) [19], and directed acyclic graphs (DAG) [44]. As non-ordinal references, the one-against-one (OaO) and the one-against-all (OaA) architectures were applied [40].
Additionally, our results were compared to a 1D self-organising map (SOM) [30]. This mapping was performed using the standard settings of the R package kohonen [53, 54] (learning rate: [0.05, 0.01], but no layer normalisation). The nodes, and consequently the clusters, were labelled based on the cascade under investigation.
Datasets
The screening procedure is evaluated on eight multiclass datasets from different domains. The main characteristics of the analysed datasets can be found in Table 4. We used data comprising different representations of alphanumeric characters (written and spoken) from the UCI machine learning repository [13], as well as gene expression and methylation profiles measured either by microarrays or by deep sequencing.
\(d_{1}\): Handwritten digits dataset. The handwritten digits dataset was made publicly available by Alpaydin and Kaynak [3] as part of the UCI repository [13]. This dataset is a collection of bitmaps of digits written by 43 different people. The classes \(y_{1}, \ldots , y_{10}\) correspond to the labels \(0, \ldots , 9\).
\(d_{2}\): Isolet dataset. The Isolet (Isolated Letter Speech Recognition) dataset by Fanty and Cole [17] and downloaded from the UCI repository [13] consists of spoken letters of the alphabet. The classes \(y_{1}, \ldots , y_{26}\) correspond to the labels \(a, \ldots , z\).
\(d_{3}\): Cancer cell lines. The NCI60 dataset (GSE32474) was collected by Pfister et al. [43]. It consists of gene expression profiles from cell lines derived from different cancer tissue types. The different cancer types are: \(y_{1}\): Leukemia, \(y_{2}\): Breast cancer, \(y_{3}\): Ovarian cancer, \(y_{4}\): Melanoma, \(y_{5}\): Central nervous system (CNS), \(y_{6}\): Colon cancer, \(y_{7}\): Renal cancer, \(y_{8}\): Non-small cell lung cancer, \(y_{9}\): Prostate cancer.
\(d_{4}\): Leukemia. As a pre-phase to the MILE study (Microarray Innovations In LEukemia) program, expression data from blood and bone marrow samples of acute and chronic leukemia patients were collected (GSE13159). Kohlmann et al. [29] could thereby show that standardised experimental protocols can lead to comparable results across different laboratories. The samples have been categorised into different leukemia subgroups and assigned to class labels as follows: \(y_{1}\): ALL (acute lymphocytic leukaemia) with hyperdiploid karyotype, \(y_{2}\): ALL with t(1;19), \(y_{3}\): ALL with t(12;21), \(y_{4}\): AML (acute myeloid leukaemia) with complex aberrant karyotype, \(y_{5}\): AML with inv(16)/t(16;16), \(y_{6}\): AML with normal karyotype and other abnormalities, \(y_{7}\): AML with t(11q23)/MLL, \(y_{8}\): AML with t(15;17), \(y_{9}\): AML with t(8;21), \(y_{10}\): c-ALL/Pre-B-ALL without t(9;22), \(y_{11}\): c-ALL/Pre-B-ALL with t(9;22), \(y_{12}\): CLL (chronic lymphocytic leukaemia), \(y_{13}\): CML (chronic myeloid leukaemia), \(y_{14}\): mature B-ALL with t(8;14), \(y_{15}\): MDS (myelodysplastic syndromes), \(y_{16}\): Non-leukemia and healthy bone marrow, \(y_{17}\): Pro-B-ALL with t(11q23)/MLL, \(y_{18}\): T-ALL.
\(d_{5}\): TCGA. This dataset is a collection of datasets that was generated by the TCGA Research Network (http://cancergenome.nih.gov/). The feature space consists of the intersection of all features of the single datasets. The different cancer types are assigned to class labels as follows: \(y_{1}\): Cholangiocarcinoma (CHOL), \(y_{2}\): Liver hepatocellular carcinoma (LIHC), \(y_{3}\): Pancreatic adenocarcinoma (PAAD), \(y_{4}\): Colon adenocarcinoma (COAD), \(y_{5}\): Rectum adenocarcinoma (READ), \(y_{6}\): Kidney renal papillary cell carcinoma (KIRP), \(y_{7}\): Kidney renal clear cell carcinoma (KIRC), \(y_{8}\): Kidney chromophobe carcinoma (KICH).
\(d_{6}\): C. elegans. Baugh et al. [6] analysed the influence of the homeodomain protein PAL-1 on the C-lineage-specific gene regulatory network in the model organism C. elegans. They gathered gene expression data from samples of wild-type and mutant embryos at 10 points in time after the 4-cell stage of the embryo (GSE2180). We normalised (mas5) and labelled these samples based on the points in time: \(y_{1}\): 0 minutes, \(y_{2}\): 23 minutes, \(y_{3}\): 41 minutes, \(y_{4}\): 53 minutes, \(y_{5}\): 66 minutes, \(y_{6}\): 83 minutes, \(y_{7}\): 101 minutes, \(y_{8}\): 122 minutes, \(y_{9}\): 143 minutes, \(y_{10}\): 186 minutes.
\(d_{7}\): Wound healing. Xiao et al. [56] analysed the total cellular RNA of blood samples taken from severe blunt trauma patients (GSE36809). They performed a time-scale analysis and took samples at different points in time after the injury. We normalised (mas5) the data, summarised the different points in time, measured in hours, and labelled them as follows: \(y_{1}\): (0,50), \(y_{2}\): [50,100), \(y_{3}\): [100,150), \(y_{4}\): [150,200), \(y_{5}\): [200,400), \(y_{6}\): [400,600), \(y_{7}\): [600,800), \(y_{8}\): control.
\(d_{8}\): HPA axis. This dataset by Agba et al. [1] comprises measurements of DNA methylation patterns of different promoters of the glucocorticoid receptor gene (NR3C1) and the imprinting control region of IGF2/H19 in 7 tissues of rats. These profiles especially enclose measurements of the hypothalamic-pituitary-adrenal axis (HPA axis), a neuroendocrine system known to be involved in the regulation of stress reactions. In our experiments, the classes denote the different tissue types: \(y_{1}\): Cortex, \(y_{2}\): Hippocampus, \(y_{3}\): Hypothalamus, \(y_{4}\): Pituitary, \(y_{5}\): Adrenal, \(y_{6}\): Skin, \(y_{7}\): Liver.
Experimental Results and Interpretation
In this section, we present the results on the utilised benchmark datasets and give some interpretation of these results. Table 5 provides an overview of the extracted longest cascades. Between one and five cascades per dataset passed the highest dataset-specific threshold. The corresponding minimal classwise sensitivity thresholds ranged from 0.75 to 0.9. The theoretical maximum number of cascades is, for all datasets, more than 800 times larger than the number of returned cascades. A general overview of the numbers of cascades that pass the first threshold in comparison to the numbers that pass the second (dataset-specific) threshold is given in the Supplementary Information.
Longest cascades. Graph representations of the longest cascades can be found in Figs. 3 and 4. All found cascades overlap in at least one class, except for those of the TCGA data \(d_5\). The extremes in the amount of overlap are \(d_{5}\), which shows no overlap, and the cascades of \(d_{2}\) and \(d_6\), which overlap in all but one class. Further structures can be observed: cascades differ in one class at the beginning (\(d_{2}\)), in the middle (\(d_{6}\)), or at the end (\(d_{1}\)); especially if more than two cascades pass the minimal sensitivity criterion, the cascades also align into more complex graphs (\(d_{3}, d_{8}\)).
Classes that are not included in the subcascade. Extended confusion tables for the longest cascades are shown in the Supplementary Information. Figure 5 exemplarily shows the confusion table for the TCGA dataset \(d_5\). As we focus on the detection of subcascades, there are classes that are not part of the longest cascades, termed Others in Figs. 3 and 4. These classes can be split into classes that are not part of any of the longest subcascades (uncovered classes) and classes that are not part of the specific subcascade but part of another of the returned longest subcascades (alternative classes). When a sample of one of these classes is presented to a specific subcascade, it is necessarily classified as one of the cascade classes, as there is no rejection option. The mean of the purity values (Eq. 13) provides a general overview of how distinct the allocation of these additional classes to cascade classes is (Table 5).
De novo calculation vs memoisation scheme. In order to demonstrate the impact of the proposed memoisation techniques, we performed additional experiments with a traditional implementation of an ordinal classifier cascade. That is, each training phase is realised as a de novo adaptation of the classification model. Both versions were based on the same implementation of base classifiers (SVM) [41] and did not use any parallelisation. The runtime of both approaches was compared on dataset \(d_{4}\) (\(|\mathcal {Y}| = 18\)), which corresponds to \(s_{all}(\mathcal {Y})>10^{18}\) subcascades. Again, a \(10\times 10\) CV was conducted for each subcascade. The memoisation techniques allowed this screening to be accomplished in about 1868 minutes, while the de novo calculation of a single \(10\times 10\) CV on all 18 classes (fixed class order) already required 798 minutes. The de novo calculation is therefore not able to perform this screening in a feasible amount of time.
Performance comparison. As the complexity of an ordinal classifier cascade is rather low in comparison to that of the chosen reference classifiers, we ran these algorithms only on the longest subcascades identified by the initial screening. The minimal classwise sensitivities achieved by the reference classifiers on the longest subcascades are shown in Table 6. An overview of all sensitivities is given in the Supplementary Information. The comparison to the non-ordinal architectures (OaO, OaA) demonstrates the impact of (correct) ordinal information through the higher performance of the ordinal classifier cascade (Occ). In five out of eight datasets (\(d_{4}\) to \(d_{8}\)), cascades exist for which the OaO achieved lower minimal sensitivities and did not even pass the minimal threshold (t). For the OaA, this was observed only twice (\(d_{6}, d_{7}\)). The ordinal architectures, the ensembles of nested dichotomies (END) and directed acyclic graphs (DAG), performed well on these datasets when constrained to the selected orders. Both approaches did not pass the minimal threshold t on \(d_{7}\). The END also passed the threshold on \(d_{4}\) only marginally. The split ordinal classifier cascade (Scc) achieved high minimal classwise sensitivities only on \(d_{8}\) and \(d_{5}\); it would have deselected individual classes in our experiments. With the exception of the first cascade of \(d_{8}\), the SOM was also not able to separate all classes of the cascades.
Handwritten digits dataset. The two longest cascades of length five found in the digit dataset \(d_{1}\) overlap in three classes. A meaningful interpretation of the direction is not obvious, but the returned cascades show possible structural properties. Three digits (0, 4, 8) are uncovered classes, and in both cascades the 8 is mainly allocated to the second cascade position, corresponding once to the digit 2 and once to 1. If 1 is placed at the second position, 0 is mainly classified as 6, whereas in the other cascade (2 at position two), 0 is classified as 5 (the third position of the cascade). This means that in the first case the first decision region is placed such that the samples of classes 6 and 0 lie on the same side, whereas in the second case they lie on opposite sides; hence the directions of the first relation in feature space differ between the two cascades. The two cascades split at the second and at the last position, and the purity values for the alternative classes show that, on average, half of the samples of the alternative classes are assigned to one intra-cascade class.
Isolet dataset. Dataset \(d_{2}\) corresponds to a spoken representation of letters. Two cascades of length 10 pass a minimal classwise sensitivity threshold of 0.9. The uncovered classes are classified to around 70% on average to the same class, and the heatmaps (Supplementary Information) show that they are not well distributed, as many entries are zero. The two cascades differ only in their first position (s vs. y). If not part of the cascade, the class s is completely assigned to the second cascade class f, whereas the assignment of y, if it is not part of the cascade, fades out until the fifth intra-cascade class. The different classes might be grouped based on the phonetic alphabet. In this case, s, f, m and n share the sound [\(\varepsilon \)], but y does not, which might be one possible interpretation of what is observed. The last five classes (b, d, g, z, c) of both cascades share the sound [iː]. The further letters (e, p, t, v) that are pronounced using this sound are also mainly classified as one of these classes. Why they are not included in the main cascade is not clear on the semantic level. The cascade, however, reveals that the samples, and hence the classes, are located such that including one of those classes seems to change the direction of the decision region sequence in a way that would lead to shorter sequences.
Cancer cell lines. The cancer cell line dataset \(d_{3}\) shows four cascades of length five. There is only one class, \(y_{2}\): breast cancer, that is not part of any cascade, and it reveals a low purity. All four cascades share their first class, \(y_{1}\): leukemia, and their centre class, \(y_{3}\): ovarian cancer. If not part of the returned cascade, no sample of \(y_{6}\): colon cancer or \(y_{8}\): lung cancer is classified as \(y_{9}\): prostate cancer; the other way round, however, \(y_{9}\): prostate cancer splits half-and-half between those classes. This reveals that the decision regions differ depending on the assumed order and outlines the importance of the assumed order being appropriately reflected in the feature representation. The correlation within this dataset can be explained by messenger molecules. Both prostate and ovarian tissue are affected by sex hormones [28]. The fact that leukemia is twice as prevalent in men as in women [2] does not rule out the influence of sex hormones and may explain why leukemia is closer to prostate cancer. A similar relationship can also be found between ovarian and kidney cancer. Here, it is assumed that estrogen has a protective effect on the kidney, while testosterone damages the kidney [7, 48]. Likewise, an effect of sex hormones on the CNS is clearly confirmed [57]. Here, a decrease in hormone concentration during aging can be attributed to loss of neuronal functions. Various studies support a neuroprotective role of estrogens against this neuronal decline [58]. The other cascade connects neurotransmitter-controlled tissues. The central nervous system is controlled by a diversity of neurotransmitters [42]. Non-adrenergic, non-cholinergic nerves are known to control the gut as well as the urogenital tract [5]. Since the lung develops embryonically from the foregut, it is not surprising that non-adrenergic, non-cholinergic nerves are also present there [5].
In addition, secreted neurotransmitters such as glutamate are known to be involved in T-cell leukemia [20].
Leukemia. Both cascades found in \(d_{4}\) share exactly one class, \(y_{5}\): AML with inv(16)/t(16;16). In both cascades, classes that correspond to lymphoid leukemia (LL) are placed before this class, followed by classes that correspond to myeloid leukemia (ML) or to syndromes that might develop into a myeloid leukemia, with the exception of the healthy bone marrow. The AML classes that are not included in one of the cascades are not recognised as distinct entities by the WHO classification system [4]. The samples of those classes might be considered not distinct enough to be included. Furthermore, certain subtypes of LL were shown to exhibit a higher incidence of myeloid antigen expression, namely pro-B-ALL and early T-ALL [55], which are the LL classes at the detected ML–LL transition. This might lead to the hypothesis that the found subcascades represent a sequence of similar but still distinct subtypes. The cascade showing a higher average purity for both the alternative and the uncovered classes (others) reflects the grouping into ML and LL also in these classes. The observation that classes of similar concepts form subcascades within the context of a larger research topic reveals that the proposed screening procedure is able to find orders within such a group and, at the same time, to relate these groups via the same consecutive and hence overarching pattern.
TCGA data. For the TCGA dataset \(d_{5}\), consisting of eight classes, a cascade of length four achieved a minimal class-wise sensitivity higher than 90%. For \(d_{5}\) all uncovered classes are assigned with a purity of at least 80%. An order for most tissue-derived tumors of endodermal or mesodermal origin can be found here. Only kidney renal clear cell carcinoma (KIRC) cannot be assigned to the mesodermal germ layer. Similar findings were made by Berman [8] in his classification analysis, in which epithelial tumors of mesodermal origin in particular exhibited class-independent behavior [8]. In line with this, there are various differentiation protocols to derive functional pancreatic or liver cells from human pluripotent stem cells (hPSCs), while few protocols are able to induce effective kidney differentiation [32]. Thus, the differentiation of renal cells and associated tumors appears to be more complex than that of other tissues. One reason for this might be the complex architecture of the kidney with all its functional units [32], which requires a variety of differentiation factors not specific to the kidney alone.
C. elegans and wound healing data. In contrast to the other datasets, the class labels of \(d_{6}\) and \(d_{7}\) are given by consecutive time points and might therefore be seen as hypothesised ordinal class labels. These timelines were not completely reconstructed in our experiments. Nevertheless, the identified subcascades followed the assumed order.
For the C. elegans dataset \(d_{6}\), two cascades passed a minimal class-wise sensitivity threshold of 75% and included all classes except one. Both orderings correspond to a progression in time, and either the class corresponding to samples taken at minute 66 (\(y_{5}\)) or the one taken at minute 83 (\(y_{6}\)) is skipped. In both cases the uncovered class was assigned to its assumed ordinal neighbours with a purity of over 90%. The high similarity of classes \(y_{5}\) and \(y_{6}\) is also indicated by the decreased classification performance of the corresponding base classifier (74% minimal class-wise sensitivity, data not shown), suggesting that the expression does not change much in this period. The detected cascades therefore construct an ordinal sequence of discriminable events rather than a reconstruction of the timeline.
Similar observations can be made for the wound healing dataset \(d_{7}\). Here, two cascades of length four achieved a minimal class-wise sensitivity higher than 80%. They differ in their second class. In the first cascade \(y_{6}\) is chosen, which corresponds to the samples taken between 400 and 600 minutes after injury. In the second one \(y_{7}\) is covered, which comprises the samples taken between 600 and 800 minutes after injury. If uncovered, \(y_{6}\) receives the class label of \(y_{7}\) (and vice versa) with a purity of over 80%. Neither cascade was able to cover three classes that reflect earlier time points (\(y_{2}\), \(y_{4}\), \(y_{5}\)). These external classes are assigned to their assumed ordinal neighbours (within the cascade) with a purity of at least 90%. As the class definitions are again based on prior assumptions, they most likely do not correspond to large changes in the process. This can also be seen in the minimal class-wise sensitivities of the binary classifiers, which are below 0.8 for time-based neighbouring classes, except for the first class \(y_{1}\) and the control class \(y_{8}\). Based on the classification tendency of the uncovered classes, one might conclude that the expression profiles of the first 50 minutes after injury and of the control class are quite distinct from the process in between. Furthermore, the assignment of the uncovered and alternative classes is consistent with the finding that wound healing consists of overlapping phases [51].
HPA-axis methylation data. Not considering both directions and the classes at the last two cascade positions, the subcascades of \(d_{8}\) reduce to two orderings:
(\(liver \prec skin \prec hippocampus/hypothalamus \prec pituitary \prec adrenal\)). One part of one cascade, \(hypothalamus \prec pituitary \prec adrenal\), is reported as the HPA-axis and is known to control the stress reaction. The role of the hippocampus in the control of the stress reaction and its interconnection with the hypothalamus might lead to the effect that hippocampus and hypothalamus are no longer discriminable if a relation reflecting the interplay of the stress response is considered. The neighbourhood between skin and the HPA-axis within the cascade reflects the findings of Agba et al. [1] that methylation patterns in skin are closely related to the patterns within the HPA-axis.
Discussion and Conclusion
Although the task of classification is mainly based on the assumption of pairwise distinct concepts, it is highly unlikely that no other relationship among the classes exists. Nevertheless, these relations might be implicit or even unknown. They might be revealed by analysing suitable data representations.
In this work, we utilise ordinal classifier cascades for detecting ordinal relations in each subset of classes. This multiclass architecture is highly sensitive to the order of classes, as decision regions of early classes overlay the decision regions of later ones. Interestingly, this property causes more rejections than the performance of the corresponding base classifiers would suggest: in our study, almost all classes would have been separable in pairwise classification experiments.
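The overlay property can be illustrated with a minimal sketch. The interface below (a list of per-class binary acceptance functions traversed in the hypothesised order) is an illustrative assumption, not the base classifiers used in our experiments; it only shows why early classes mask later ones:

```python
from typing import Any, Callable, Sequence

def cascade_predict(x: Any,
                    order: Sequence[str],
                    stages: Sequence[Callable[[Any], bool]]) -> str:
    """Traverse the cascade: the first stage that accepts the sample
    assigns its class, so decision regions of early classes overlay
    the decision regions of all later classes in the order."""
    for label, stage in zip(order[:-1], stages):
        if stage(x):
            return label
    return order[-1]  # fall through to the last class of the order

# Toy one-dimensional example with threshold stages:
order = ["low", "mid", "high"]
stages = [lambda x: x < 1.0,  # "low" vs {"mid", "high"}
          lambda x: x < 2.0]  # "mid" vs {"high"}
print([cascade_predict(v, order, stages) for v in (0.5, 1.5, 2.5)])
# → ['low', 'mid', 'high']
```

Swapping two classes in `order` without adapting the stages changes the resulting decision regions and hence the class-wise sensitivities, which is exactly the order sensitivity the screening exploits.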
From a combinatorial point of view, the number of candidate cascades increases exponentially with the number of classes. The proposed exhaustive screening is therefore out of reach for approaches that require a de novo generation of all classification models, as it would require training an ordinal classifier for each subset of classes and each permutation thereof. In our approach we circumvent this complexity by the extensive use of memoization techniques and theoretical error bounds. This allowed us to perform exhaustive screens for datasets with 26 classes, which corresponds to the evaluation of more than \(10^{27}\) candidate orders. The runtime of our approach is mainly determined by the training of the required number of base classifiers, which is quadratic in the overall number of classes. It therefore brings ordinal classifier cascades based on more sophisticated but also more complex base classifiers [14, 15, 25, 38, 46, 59] into range. To our knowledge, our screening is the first to apply memoization techniques to ordinal classification. It might serve as a blueprint for other ordinal classifiers or fusion architectures.
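The combinatorics can be made concrete with a short calculation. The counting conventions below (cascades of length at least two, one binary base classifier per unordered class pair) are our illustrative assumptions; the exact decomposition into base classifiers may differ:

```python
from math import perm

def candidate_orders(n: int) -> int:
    # Ordered sequences of length k = 2..n drawn from n classes:
    # every permutation of every class subset is a candidate cascade.
    return sum(perm(n, k) for k in range(2, n + 1))

def base_classifier_count(n: int) -> int:
    # With memoization, each pairwise binary base classifier is
    # trained once and reused across all candidate cascades.
    return n * (n - 1) // 2

print(candidate_orders(26) > 10**27)  # exhaustive retraining is infeasible
print(base_classifier_count(26))      # → 325 classifiers suffice for training
```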
Due to the transitive relationship between the minimal class-wise sensitivities of a cascade and its subcascades, the probability of larger ordinal structures decreases with their length. Long ordinal subcascades must be seen as rare events. As such, we consider the longest cascades that pass the highest threshold on the minimal class-wise sensitivity as the most informative ones. Of course, both length and threshold clearly depend on the classification task and the chosen data representation. There is no guarantee that cascades of suitable length will be found.
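The pruning enabled by this relationship can be sketched as follows. The breadth-first enumeration and the `score` oracle are illustrative assumptions rather than our exact procedure; the sketch only relies on the property that extending a cascade can never increase its minimal class-wise sensitivity, so a candidate below the threshold can be discarded together with all of its extensions:

```python
def screen(classes, score, threshold):
    """Enumerate cascades breadth-first, pruning every candidate whose
    minimal class-wise sensitivity (as reported by the, ideally
    memoised, `score` oracle) falls below the threshold; extensions
    cannot raise that sensitivity, so the pruning loses no survivors."""
    frontier = [(c,) for c in classes]
    survivors = []
    while frontier:
        nxt = []
        for cascade in frontier:
            if score(cascade) < threshold:
                continue  # no extension can recover: prune the branch
            survivors.append(cascade)
            nxt.extend(cascade + (c,) for c in classes if c not in cascade)
        frontier = nxt
    return survivors

def toy_score(cascade):
    # Hypothetical oracle: orders consistent with the true ordinal
    # sequence 0 < 1 < 2 score perfectly, all other orders poorly.
    return 1.0 if list(cascade) == sorted(cascade) else 0.4

survivors = screen(range(3), toy_score, threshold=0.9)
print(len(survivors))  # → 7: the sorted subsequences of (0, 1, 2)
```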
The identified subcascades can be seen as ordered subsets of well separable classes. They might be considered a roadmap of axes organising the complete collection of classes. However, not all classes are connected in this network. The labels of these external classes cannot be predicted by the ordinal classifier cascades. As most base classifiers do not possess a rejection option, the samples of the external classes will in general be assigned to cascade classes. Although the classification of these samples is incorrect, their association with one of the ordinal classes can reveal properties of the external classes. In our experiments, we observed large sets of external classes that are assigned to a small set of consecutive classes of an ordinal classifier cascade. This might hint at a too fine-grained class definition (e.g. caused by inappropriate sampling) that cannot be separated in feature space. The whole process of identifying subcascades can also be seen as the selection of a maximal number of classes that can collectively be discriminated. This in turn can give rise to new hypotheses about the processes involved, such as “Is this entity really a distinct new group?” or “Does the development stagnate after a certain time?”
Overall, our experiments show that ordinal substructures of classes can be detected in feature space and that they corroborate existing findings. These structures might be seen as hypotheses for ordinal relations and for discriminative entities among the corresponding class concepts, which can be investigated in a more detailed analysis. From a theoretical point of view, the aggregation of ordinal subcascades remains an open research question. It might be addressed in the form of multiclass architectures that directly address partial orders of classes.
Notes
For simplification we allow cascades of length \(k=1\) in this section.
References
Agba O, Lausser L, Huse K, Bergmeier C, Jahn N, Groth M, Bens M, Sahm A, Gall M, Witte O, Kestler HA, Schwab M, Platzer M (2017) Tissue-, sex-, and age-specific DNA methylation of rat glucocorticoid receptor gene promoter and insulin-like growth factor 2 imprinting control region. Physiol Genomics 49(11):690–702
Allain E, Venzl K, Caron P, Turcotte V, Simonyan D, Gruber M, Le T, Lévesque E, Guillemette C, Vanura K (2018) Sex-dependent association of circulating sex steroids and pituitary hormones with treatment-free survival in chronic lymphocytic leukemia patients. Ann Hematol 97(9):1649–1661
Alpaydin E, Kaynak C (1998) Cascaded classifiers. Kybernetika 34:369–374
Arber D, Orazi A, Hasserjian R, Thiele J, Borowitz M, Le Beau M, Bloomfield C, Cazzola M, Vardiman JW (2016) The 2016 revision to the World Health Organization classification of myeloid neoplasms and acute leukemia. Blood 127(20):2391–2405
Barnes P (1984) The third nervous system in the lung: physiology and clinical perspectives. Thorax 39(8):561–567
Baugh LR, Hill AA, Claggett JM, Hill-Harfe K, Wen JC, Slonim DK, Brown EL, Hunter CP (2005) The homeodomain protein PAL-1 specifies a lineage-specific regulatory network in the C. elegans embryo. Development 132(8):1843–1854
Baylis C (2009) Sexual dimorphism in the aging kidney: differences in the nitric oxide system. Nat Rev Nephrol 5(7):384–396
Berman J (2004) Tumor classification: molecular analysis meets Aristotle. BMC Cancer 4(1):10
Bishop C (2006) Pattern recognition and machine learning. Springer, New York
Breiman L, Friedman JH, Olshen RA, Stone CJ (1984) Classification and regression trees. The Wadsworth statistics/probability series. Chapman and Hall/CRC, Boca Raton
Cardoso J, Pinto da Costa J (2007) Learning to classify ordinal data: the data replication method. J Mach Learn Res 8:1393–1429
Crammer K, Singer Y (2001) Pranking with ranking. In: Dietterich T, Becker S, Ghahramani Z (eds) Proceedings of the 14th international conference on neural information processing systems: natural and synthetic. Advances in neural information processing systems, vol 14. MIT Press, Cambridge, pp 641–647
Dheeru D, Karra TE (2017) UCI machine learning repository
Ding S, Zhang N, Zhang X, Wu F (2017) Twin support vector machine: theory, algorithm and applications. Neural Comput Appl 28(11):3119–3130
Ding S, Zhao X, Zhang J, Zhang X, Xue Y (2019) A review on multiclass TWSVM. Artif Intell Rev 52(2):775–801
Edla D, Jana P (2012) A prototypebased modified DBSCAN for gene clustering. Procedia Technol 6:485–492
Fanty M, Cole R (1991) Spoken letter recognition. In: Lippmann RP, Moody JE, Touretzky DS (eds) Advances in neural information processing systems 3. Morgan Kaufmann, New York, pp 220–226
Frank E, Hall M (2001) A simple approach to ordinal classification. In: Raedt LD, Flach P (eds) Proceedings of the machine learning: ECML 2001—12th European conference on machine learning, Freiburg, Germany, September 5–7, 2001, lecture notes in artificial intelligence, vol 2167. Springer, Berlin, pp 145–156
Frank E, Kramer S (2004) Ensembles of nested dichotomies for multiclass problems. In: Proceedings of the 21st international conference of machine learning (ICML2004). ACM Press, London, pp 305–312
Ganor Y, Levite M (2014) The neurotransmitter glutamate and human T cells: glutamate receptors and glutamate-induced direct and potent effects on normal human T cells, cancerous human leukemia and lymphoma T cells, and autoimmune human T cells. J Neural Transm 121(8):983–1006
Guyon I, Elisseeff A (2003) An introduction to variable and feature selection. J Mach Learn Res 3:1157–1182
Hastie T, Tibshirani R, Friedman JH (2001) The elements of statistical learning. Springer, New York
Hühn J, Hüllermeier E (2009) Is an ordinal class structure useful in classifier learning? J Data Min Model Manag 1(1):45–67
Japkowicz N, Shah M (2011) Evaluating learning algorithms: a classification perspective. Cambridge University Press, New York
Jayadeva A, Khemchandani R, Chandra S (2007) Twin support vector machines for pattern classification. IEEE Trans Pattern Anal Mach Intell 29(5):905–910
Jiang Z, Sun G, Gu Q, Chen D (2014) An ordinal multiclass classification method for readability assessment of Chinese documents. In: Buchmann R, Kifor CV, Yu J (eds) Knowledge science, engineering and management. Springer, Cham, pp 61–72
Kestler HA, Lausser L, Lindner W, Palm G (2011) On the fusion of threshold classifiers for categorization and dimensionality reduction. Comput Stat 26(2):321–340
Key T (1995) Hormones and cancer in humans. Mutat Res Fundam Mol Mech Mutagen 333(1):59–67
Kohlmann A, Kipps T, Rassenti L, Downing J, Shurtleff S, Mills K, Gilkes A, Hofmann WK, Basso G, Dell’Orto M, Foà R, Chiaretti S, De Vos J, Rauhut S, Papenhausen P, Hernández J, Lumbreras E, Yeoh A, Koay E, Li R, Wm Liu, Williams P, Wieczorek L, Haferlach T (2008) An international standardization programme towards the application of gene expression profiling in routine leukaemia diagnostics: the microarray innovations in LEukemia study prephase. Br J Haematol 142(5):802–807
Kohonen T (1995) Self-organizing maps, vol I. Springer, Berlin
Kotsiantis S, Pintelas P (2004) A cost sensitive technique for ordinal classification problems. In: Vouros G, Panayiotopoulos T (eds) Proceedings of the methods and applications of artificial intelligence: third hellenic conference on AI (SETN 2004), Samos, Greece, May 5–8, 2004. Springer, Berlin, pp 220–229
Lam A, Freedman B, Morizane R, Lerou P, Valerius M, Bonventre J (2014) Rapid and efficient differentiation of human pluripotent stem cells into intermediate mesoderm that forms tubules expressing kidney proximal tubular markers. J Am Soc Nephrol 25(6):1211–1225
Lattke R, Lausser L, Müssel C, Kestler HA (2015) Detecting ordinal class structures. In: Schwenker F, Roli F, Kittler J (eds) Proceedings of the multiple classifier systems—12th international workshop (MCS 2015), Günzburg, Germany, June 29–July 1, 2015. Image processing, computer vision, pattern recognition, and graphics, vol 9132. Springer, Cham, pp 100–111
Lausser L, Müssel C, Kestler HA (2013) Measuring and visualizing the stability of biomarker selection techniques. Comput Stat 28(1):51–65
Lausser L, Schmid F, Platzer M, Sillanpää MJ, Kestler HA (2016) Semantic multiclassifier systems for the analysis of gene expression profiles. Arch Data Sci Ser A 1(1):157–176
Lausser L, Szekely R, Schirra LR, Kestler HA (2017) The influence of multiclass feature selection on the prediction of diagnostic phenotypes. Neural Process Lett 48(2):863–880
Lausser L, Schäfer LM, Schirra LR, Szekely R, Schmid F, Kestler HA (2019) Assessing phenotype order in molecular data. Sci Rep 9(1):11746
Lausser L, Szekely R, Klimmek A, Schmid F, Kestler HA (2020) Constraining classifiers in molecular analysis: invariance and robustness. J R Soc Interface 17(163):20190612
Lin HT, Li L (2012) Reduction from cost-sensitive ordinal ranking to weighted binary classification. Neural Comput 24(5):1329–1367
Lorena AC, de Carvalho ACPLF, Gama JMP (2009) A review on the combination of binary classifiers in multiclass problems. Artif Intell Rev 30:19–37
Müssel C, Lausser L, Maucher M, Kestler HA (2012) Multiobjective parameter selection for classifiers. J Stat Soft 46(5):1–27
Nicoll R, Malenka R, Kauer J (1990) Functional comparison of neurotransmitter receptor subtypes in mammalian central nervous system. Physiol Rev 70(2):513–565
Pfister T, Reinhold W, Agama K, Gupta S, Khin S, Kinders R, Parchment R, Tomaszewski J, Doroshow J, Pommier Y (2009) Topoisomerase I levels in the NCI60 cancer cell line panel determined by validated ELISA and microarray analysis and correlation with indenoisoquinoline sensitivity. Mol Cancer Therap 8(7):1878–1884
Platt JC, Shawe-Taylor J, Cristianini N (1999) Large margin DAGs for multiclass classification. In: Solla SA, Leen TK, Müller K (eds) Proceedings of the 12th international conference on neural information processing systems: minisymposium on causality in time series, advances in neural information processing systems, vol 12. MIT Press, Cambridge, pp 547–553
Rivest RL (1987) Learning decision lists. Mach Learn 2(3):229–246
Schwenker F, Kestler HA, Palm G (2001) Three learning phases for radial-basis-function networks. Neural Netw 14(4–5):439–458
Taudien S, Lausser L, Giamarellos-Bourboulis EJ, Sponholz C,FS, Felder M, Schirra LR, Schmid F, Gogos C,SG, Petersen BS, Franke A, Lieb W, Huse K, Zipfel PF, Kurzai O, Moepps B, Gierschik P, Bauer M, Scherag A, Kestler HA, Platzer M (2016) Genetic factors of the disease course after sepsis: rare deleterious variants are predictive. EBioMedicine 12:227–238
Valdivielso J, JacobsCachá C, Soler MJ (2019) Sex hormones and their influence on chronic kidney disease. Curr Opin Nephrol Hypertens 28(1):1–9
Vapnik VN (1998) Statistical learning theory. Wiley, New York
Waegeman W, Baets BD, Boullart L (2008) ROC analysis in ordinal regression learning. Pattern Recognit Lett 29(1):1–9
Wang PH, Huang BS, Horng HC, Yeh CC, Chen YJ (2018) Wound healing. J Chin Med Assoc 81(2):94–101
Webb AR (2002) Statistical pattern recognition, 2nd edn. Wiley, Chichester
Wehrens R, Buydens L (2007) Self- and super-organizing maps in R: the Kohonen package. J Stat Softw 21(5):1–19
Wehrens R, Kruisselbrink J (2018) Flexible selforganizing maps in Kohonen 3.0. J Stat Softw 87(7):1–18
Wiernik P, Dutcher J, Gertz M (2018) Neoplastic diseases of the blood. Springer, Berlin
Xiao W, Mindrinos M, Seok J, Cuschieri J, Cuenca A, Gao H, Hayden D, Hennessy L, Moore E, Minei JP, Bankey P, Johnson J, Sperry J, Nathens A, Billiar T, West M, Brownstein B, Mason P, Baker H, Finnerty C, Jeschke M, Lòpez MC, Klein M, Gamelli R, Gibran N, Arnoldo B, Xu W, Zhang Y, Calvano S, McDonaldSmith G, Schoenfeld D, Storey J, Cobb J, Warren H, Moldawer L, Herndon D, Lowry S, Maier R, Davis R, Tompkins R (2011) A genomic storm in critically injured humans. J Exp Med 208(13):2581–2590
Young W, Goy R, Phoenix C (1964) Hormones and sexual behavior. Science 143(3603):212–218
Zárate S, Stevnsner T, Gredilla R (2017) Role of estrogen and other sex hormones in brain aging: neuroprotection and DNA repair. Front Aging Neurosci 9:430
Zhang N, Ding S, Zhang J, Xue Y (2018) An overview on restricted Boltzmann machines. Neurocomputing 275:1186–1199
Acknowledgements
We thank Konstanze Döhner and Thomas Barth for helpful comments on some of the found orderings. The research leading to these results has received funding from the German Research Foundation (DFG, GRK 2254 HEIST and SFB 1074 project Z1), and the Federal Ministry of Education and Research (BMBF, e:Med, conFirm, id 01ZX1708C) all to HAK.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Ludwig Lausser and Lisa M. Schäfer have contributed equally to this work.
Lausser, L., Schäfer, L.M., Kühlwein, S.D. et al. Detecting Ordinal Subcascades. Neural Process Lett 52, 2583–2605 (2020). https://doi.org/10.1007/s11063-020-10362-0
Keywords
Ordinal classification
Classifier cascades
Error bounds
Subsets
Supersets