Towards Adaptive Unknown Authentication for Universal Domain Adaptation by Classifier Paradox

Universal domain adaptation (UniDA) is a general unsupervised domain adaptation setting, which addresses both domain and label shifts in adaptation. Its main challenge lies in how to identify target samples in unshared or unknown classes. Previous methods commonly strive to depict sample "confidence" along with a threshold for rejecting unknowns, and align feature distributions of shared classes across domains. However, it is still hard to pre-specify a "confidence" criterion and threshold which are adaptive to various real tasks, and a mis-prediction of unknowns further incurs misalignment of features in shared classes. In this paper, we propose a new UniDA method with adaptive Unknown Authentication by Classifier Paradox (UACP), considering that samples with paradoxical predictions are probably unknowns belonging to none of the source classes. In UACP, a composite classifier is jointly designed with two types of predictors. That is, a multi-class (MC) predictor classifies samples to one of the multiple source classes, while a binary one-vs-all (OVA) predictor further verifies the prediction by the MC predictor. Samples with verification failure or paradox are identified as unknowns. Further, instead of feature alignment for shared classes, implicit domain alignment is conducted in the output space such that samples across domains share the same decision boundary, though with feature discrepancy. Empirical results validate UACP under both open-set and universal UDA settings.


Introduction
Unsupervised domain adaptation (UDA) (Ben-David et al., 2010; Tzeng et al., 2014; Long et al., 2015) aims to exploit a fully-labeled source domain to help the learning of an unlabeled target domain. Existing UDA methods mainly attempt to generate domain-invariant representations, either by reducing the distribution discrepancy across domains with some distance metric, such as Maximum Mean Discrepancy (MMD) (Tzeng et al., 2014), or by adversarial learning (Ganin et al., 2016) between a feature generator and a domain discriminator. However, they commonly make the strong assumption that the source and target domains share the same label set, which limits their applicability in many real-world applications.
In real tasks, the label sets of the source and target domains are usually different. For example, with the emergence of Big Data (Sagiroglu & Sinanc, 2013), large-scale labeled datasets like ImageNet-1K (Russakovsky et al., 2015) and Google Open Images (Krasin et al., 2017) are readily accessible as source domains, while the target domain may only contain a subset of the source classes, leading to so-termed Partial Domain Adaptation (PDA) (Cao et al., 2018). On the other hand, in real open learning scenes, target domains usually contain unknown classes not covered by the source domain, leading to the setting of so-called Open-Set Domain Adaptation (OSDA) (Panareda Busto & Gall, 2017). The learning goal there is to correctly classify target data from known classes, while rejecting data from all unknown classes as "unknown". Recently, Universal Domain Adaptation (UniDA) (You et al., 2019), a general learning setting without the need for prior knowledge of the label sets across domains, has attracted increasing attention. Obviously, UniDA is a more realistic UDA setting, since the target ground-truth is actually not available in real tasks.
A main learning challenge posed in such a setting is how to identify the target samples in unshared or unknown classes. Previous methods mainly strive to depict a sample "confidence" along with a pre-defined threshold to detect target unknowns, and then align the distributions of shared classes across domains. Commonly adopted confidence criteria include prediction entropy (Saito et al., 2020), source similarity (Panareda Busto & Gall, 2017), classifier discrepancy (Liang et al., 2021), and minimum inter-class distance (Saito & Saenko, 2021). Despite great progress, it is still hard to pre-specify a universal "confidence" criterion and threshold that are adaptive to various complicated real tasks. Furthermore, a mis-prediction of unknowns further incurs mis-alignment of features in shared classes, probably leading to negative transfer. To address these issues, we propose a new UniDA method with adaptive unknown authentication by classifier paradox (UACP). Specifically, the prediction paradox between two types of predictors is adopted in UACP to adaptively identify target unknowns, since samples with paradoxical predictions are probably unknowns belonging to none of the source classes.
In UACP, a composite classifier is designed with two types of predictors, for classification and verification, respectively. The MC predictor classifies samples to one of the multiple source classes, and the corresponding binary OVA predictor further verifies whether a sample belongs to the class predicted by the MC predictor. Samples with paradoxical predictions are rejected as unknowns. An illustration is shown in Figure 1: the sample with true label (TL) "tiger", denoted by the purple star, is classified as "cat" by the MC predictor, and thus declined by "airplane" and "dog". But meanwhile, the corresponding OVA predictor (cat vs. others) gives a paradoxical prediction that it belongs to "others" rather than "cat", so it is predicted as an "unknown" sample not included in the source classes, since the output space of the OVA predictor covers both known source classes and unknown classes within the negative "others" category. At the same time, the sample with TL "cat", denoted by the orange star, receives consistent predictions from the two predictors, and is thus classified to the known "cat" class. Moreover, different from previous feature alignment for shared classes, implicit domain alignment is conducted in the output space by a domain-invariant classifier. Specifically, features are generated for both domains such that the classifier correctly classifies source samples and captures the target structure as well. In this way, samples across domains share the same decision boundary, though with different feature distributions. The main contributions of this paper are summarized as follows, • We propose adaptive unknown authentication by classifier paradox for UniDA, such that target unknowns are adaptively identified by the prediction paradox between two types of predictors.
• We propose implicit domain alignment in the output space for UniDA, such that samples across domains share the same decision boundary, though exhibiting feature discrepancy.
• Empirical comparisons with state-of-the-arts validate the proposed UACP in both open-set and universal UDA settings.

Related Works
In this section, we briefly review the related UDA methods, including closed-set UDA, OSDA, and UniDA methods in separate sub-sections.

Unsupervised Domain Adaptation
Closed-set UDA is the classical scenario in which the source and target domains exhibit distribution shift but share consistent label sets. UDA approaches commonly attempt to reduce the distribution discrepancy to obtain domain-invariant features across domains. The two main categories are statistical methods and adversarial methods. Statistical UDA methods directly minimize a discrepancy metric across domains, such as MMD (Tzeng et al., 2014), multi-kernel MMD (Long et al., 2015), joint MMD (Long et al., 2017), and correlation (Sun et al., 2016). Adversarial methods maximize domain confusion via adversarial learning between a feature generator and a domain discriminator, or between different classifiers (Bousmalis et al., 2016; Ganin et al., 2016; Saito et al., 2018). Besides, some works also utilize learning strategies from other fields, such as curriculum learning (Choi et al., 2019).

Fig. 2 Overall framework of UACP, which includes a shared feature extractor F for the source and target domains, and a composite classifier with an MC predictor and binary OVA predictors. The MC and OVA predictors share the layer neurons, i.e., the first K neurons construct the MC predictor, while the k-th and (K + k)-th neurons construct the OVA predictor for the k-th class.

Universal Domain Adaptation
In UniDA, both domains may contain unshared or private classes, while no prior information about the target label set is provided. You et al. (2019) quantify sample-level transferability to distinguish shared and private classes in each domain. Later on, Saito et al. (2020) apply neighborhood clustering and entropy separation to encourage known target samples to stay close to source prototypes and away from unknown classes. Fu et al. (2020) present calibrated multiple uncertainty to detect open classes more accurately. Li et al. (2021) utilize the intrinsic structure of target samples and provide a unified framework to deal with the different sub-cases of UniDA. Saito & Saenko (2021) adopt the minimum inter-class distance in the source domain as the threshold to identify target unknowns. Some recent studies also consider UniDA in different scenarios: Yu et al. (2021) adopt the divergence between two classifiers as sample confidence in noisy UniDA, and Liang et al. (2021) develop an informative consistency score based on two classifiers to help distinguish unknown samples in source-free UniDA.
Different from existing works (Yu et al., 2021; Liang et al., 2021) that adopt the discrepancy between two classifiers of the same structure to describe sample "confidence", our proposed UACP exploits the prediction paradox between two different types of predictors (MC and OVA) to directly identify unknowns. Besides, OVANet (Saito & Saenko, 2021) utilizes OVA classifiers in UniDA to seek the minimum source inter-class distance as the unknown threshold, while UACP adopts OVA classifiers for adaptive unknown authentication.

Methodology
In this section, we introduce UACP for UniDA by classifier paradox. We first revisit the problem setting of UniDA and describe the network architecture of UACP. After that, we describe the individual components of UACP in detail.

Problem Setting and Network Architecture
In UniDA, we are given a labeled source domain $\mathcal{D}_s = \{(x_i^s, y_i^s)\}_{i=1}^{N_s}$ with $N_s$ labeled source samples, and an unlabeled target domain $\mathcal{D}_t = \{x_i^t\}_{i=1}^{N_t}$ with $N_t$ unlabeled target samples, where $x_i^s$ and $x_i^t$ denote source and target samples, respectively. $y_i^s \in \{1, \cdots, K\}$ is the class label for sample $x_i^s$, and $K$ is the number of source classes. Define $\mathcal{L}_s$ and $\mathcal{L}_t$ as the label sets of the source and target domains, respectively. Then the class set shared across domains is denoted as $\mathcal{L}_s \cap \mathcal{L}_t$, while $\mathcal{L}_s - \mathcal{L}_t$ and $\mathcal{L}_t - \mathcal{L}_s$ represent the source-private and target-private class sets, respectively. We mainly focus on the scenario with $\mathcal{L}_t - \mathcal{L}_s \neq \emptyset$, covering both OSDA and UniDA, and the learning goal is to classify target samples into $|\mathcal{L}_s \cap \mathcal{L}_t| + 1$ classes, that is, to classify known target samples into the shared source classes, while recognizing unknown target samples from all target-private classes as well.
The architecture of UACP is given in Figure 2. It contains two components: (i) a feature extractor $F$, which outputs an $\ell_2$-normalized feature vector, and (ii) a composite classifier $CC$ composed of $2 \times K$ neurons. In $CC$, an MC predictor built on the first $K$ neurons classifies target samples into one of the $K$ source classes. In addition, $K$ OVA predictors verify the prediction of the MC predictor, where the OVA predictor for the $k$-th class is built on the $k$-th and $(K+k)$-th neurons. A memory bank stores the current features of all target samples. For a target sample $x$, $p_{mc}(x)$ and $p_{ova}(x)$ denote the predictions from the MC predictor and the corresponding OVA predictor, respectively.
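For concreteness, below is a minimal PyTorch sketch of the composite classifier. The feature dimension `feat_dim` and the use of a single plain linear layer are our assumptions, since the paper only specifies the neuron layout.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompositeClassifier(nn.Module):
    """Composite classifier CC: one linear layer with 2*K neurons shared
    between the MC predictor and the K binary OVA predictors."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.K = num_classes
        self.fc = nn.Linear(feat_dim, 2 * num_classes)

    def forward(self, features: torch.Tensor):
        logits = self.fc(features)                    # (B, 2K)
        p_mc = F.softmax(logits[:, : self.K], dim=1)  # first K neurons -> MC predictor
        # OVA predictor k pairs the k-th (positive) and (K+k)-th (negative) neurons.
        ova_logits = torch.stack((logits[:, : self.K], logits[:, self.K :]), dim=2)
        p_ova = F.softmax(ova_logits, dim=2)          # (B, K, 2): [..., 0]=p+, [..., 1]=p-
        return p_mc, p_ova
```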

Composite Classifier with MC and OVA predictors
In order to adaptively identify unknown target samples, a novel composite classifier with both MC and OVA predictors is designed in UACP. The MC predictor outputs a $K$-dimensional vector, in order to classify samples into one of the source classes. Further, there are $K$ OVA predictors, represented as $OVA = \{ova_1, ova_2, \cdots, ova_K\}$, where each OVA predictor, related to one source class, verifies the prediction of the MC predictor. Let $p_{mc}(x) = \sigma(MC(F(x))) \in \mathbb{R}^K$ denote the probability output vector for sample $x$ from the MC predictor, where $\sigma$ is the softmax function, and each dimension $p_{mc}^k(x)$ describes the probability of $x$ belonging to the $k$-th class. $p_{ova_k}(x) = \sigma(ova_k(F(x))) \in \mathbb{R}^2$ denotes the probability output for $x$ from the $k$-th OVA predictor, in which $p_{ova_k}^+(x)$ and $p_{ova_k}^-(x)$ are the probabilities of $x$ belonging to the $k$-th (positive) class and the rest (negative) classes, respectively.
To obtain discriminative features among different categories, we minimize the cross-entropy loss with source supervision for the MC predictor,

$$\mathcal{L}_{CE} = \frac{1}{N_s}\sum_{i=1}^{N_s} \ell_{ce}\big(p_{mc}(x_i^s),\, y_i^s\big), \qquad (1)$$

where $\ell_{ce}$ is the standard cross-entropy loss.
Due to the property that the OVA predictor does not force each sample to belong only to source classes, UACP adopts it to further verify the prediction of the MC predictor. For each source class, an OVA predictor learns the decision boundary between the positive in-class and negative out-class categories, where the negative category actually includes both the remaining source classes and the unknown classes. At the same time, UACP learns discriminative features among different categories by maximizing the distance between similar categories (Padhy et al., 2020; Saito & Saenko, 2021). Specifically, for each source sample $x_i^s$, the discrepancy between $ova_{y_i^s}$ and the OVA predictor w.r.t. the closest negative class is further maximized,

$$\mathcal{L}_{SOVA} = \frac{1}{N_s}\sum_{i=1}^{N_s}\Big[-\log p_{ova_{y_i^s}}^+(x_i^s) - \min_{j \neq y_i^s} \log p_{ova_j}^-(x_i^s)\Big], \qquad (2)$$

where $j$ represents the closest negative class for $x_i^s$. Through minimizing the above loss, OVA predictors can not only identify in-class and out-class samples, but also separate similar classes with source supervision.
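A sketch of Eq. (2) follows, reusing the `(B, K, 2)` OVA output from the classifier sketch above; the batch-mean reduction and the numerical epsilon are implementation assumptions.

```python
def source_ova_loss(p_ova: torch.Tensor, labels: torch.Tensor,
                    eps: float = 1e-8) -> torch.Tensor:
    """L_SOVA: raise p+ of the ground-truth class, and raise p- of the closest
    negative class (the non-label class whose log p- is smallest, i.e. hardest)."""
    B, K, _ = p_ova.shape
    idx = torch.arange(B, device=p_ova.device)
    log_pos = torch.log(p_ova[idx, labels, 0] + eps)  # log p+ of the true class
    log_neg = torch.log(p_ova[..., 1] + eps)          # (B, K) log p- of every class
    mask = F.one_hot(labels, K).bool()
    # Exclude the true class, then take the minimum log p-: the closest negative j.
    # (All other log p- values are <= 0, so the filled zeros are never selected.)
    hard_neg = log_neg.masked_fill(mask, 0.0).min(dim=1).values
    return (-log_pos - hard_neg).mean()
```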

Prediction Paradox for Target Unknown Authentication
Since some target categories are absent from the source domain, it is difficult to make accurate predictions for target samples directly with the source classifier, especially for target unknowns. To this end, UACP adopts the classifier paradox for adaptive unknown authentication: the MC predictor classifies a sample into one of the multiple source classes, and the corresponding OVA predictor further verifies whether the sample belongs to the predicted class or not. If the OVA predictor affirms the prediction of the MC predictor, i.e., MC and OVA give consistent predictions, then the sample is confidently predicted as a known source class. Otherwise, if the verification by the OVA predictor fails, i.e., the MC and OVA predictors give paradoxical predictions, then the sample tends to belong to an unknown class. Finally, for each sample $x_i$, let $k = \arg\max_c p_{mc}^c(x_i)$ denote the class predicted by the MC predictor, then

$$\hat{y}(x_i) = \begin{cases} C_k, & p_{ova_k}^+(x_i) \geq p_{ova_k}^-(x_i), \\ C_{unknown}, & \text{otherwise}, \end{cases} \qquad (3)$$

where $C_k$ and $C_{unknown}$ denote the $k$-th known class and the unknown class, respectively. Further, we adopt an entropy-strengthened loss over target samples for the MC predictor, in order to strengthen the consistency between the MC and OVA predictors, and to capture the low-density separation of target samples in classifier learning as well. Specifically, for a known target sample whose MC prediction is affirmed by the OVA predictor, we further constrain the MC predictor to a sharper probability distribution, i.e., a more confident prediction, while for an unknown sample with prediction paradox, a more uniform distribution, i.e., a less confident prediction, is expected. Finally, for each target sample $x_i^t$, the entropy-strengthened loss is expressed as,

$$\mathcal{L}_{ESL} = \frac{1}{N_t}\sum_{i=1}^{N_t} w_i\, H\big(p_{mc}(x_i^t)\big), \qquad (4)$$

and

$$w_i = \begin{cases} 1, & p_{ova_k}^+(x_i^t) - p_{ova_k}^-(x_i^t) > m, \\ -1, & p_{ova_k}^-(x_i^t) - p_{ova_k}^+(x_i^t) > m, \\ 0, & \text{otherwise}, \end{cases} \qquad (5)$$

where $H(\cdot)$ is the entropy, $k$ is the class predicted for $x_i^t$ by the MC predictor, and $m$ is the margin for selecting confident known and unknown samples. In particular, with our special design of the composite classifier, the predictions of MC and OVA tend to be consistent due to their partially shared parameters. Note that we use $m$ to constrain only confident target samples, in order to exclude incorrect predictions from the MC predictor.
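The decision rule of Eq. (3) and the entropy-strengthened loss of Eqs. (4)-(5) can be sketched as below; note that the gap form $p^+ - p^- > m$ of the margin test follows our reconstruction of Eq. (5) and is an assumption.

```python
def authenticate(p_mc: torch.Tensor, p_ova: torch.Tensor) -> torch.Tensor:
    """Eq. (3): MC picks class k; the k-th OVA predictor must affirm it
    (p+ >= p-), otherwise the sample is rejected as unknown (label -1)."""
    k = p_mc.argmax(dim=1)
    idx = torch.arange(p_mc.size(0), device=p_mc.device)
    affirmed = p_ova[idx, k, 0] >= p_ova[idx, k, 1]
    return torch.where(affirmed, k, torch.full_like(k, -1))


def entropy_strengthened_loss(p_mc: torch.Tensor, p_ova: torch.Tensor,
                              m: float = 0.4, eps: float = 1e-8) -> torch.Tensor:
    """Eqs. (4)-(5): sharpen MC predictions confidently affirmed by OVA (w=+1),
    flatten confidently paradoxical ones (w=-1), ignore the rest (w=0)."""
    k = p_mc.argmax(dim=1)
    idx = torch.arange(p_mc.size(0), device=p_mc.device)
    gap = p_ova[idx, k, 0] - p_ova[idx, k, 1]      # >0: affirmed, <0: paradox
    entropy = -(p_mc * torch.log(p_mc + eps)).sum(dim=1)
    w = torch.zeros_like(gap)                      # gating weights carry no gradient
    w[gap > m] = 1.0                               # confident known: minimize entropy
    w[gap < -m] = -1.0                             # confident unknown: maximize entropy
    return (w * entropy).mean()
```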

Domain-invariant Classifier for Implicit Domain Alignment
Previous UniDA methods commonly reduce the domain shift by feature alignment over the shared classes across domains, where a mis-identification of target unknowns further incurs mis-alignment of these classes. In UACP, an implicit domain alignment is instead conducted directly in the output space. Specifically, a domain-invariant classifier is trained such that samples across domains share the same decision boundary, though exhibiting different feature distributions. First, the source supervision is adopted for both feature extractor and classifier learning in Sect. 3.2. Due to the lack of target ground-truth, we further propose to leverage self-supervised knowledge from the target data. Our idea is that nearby samples should be close to each other in the feature space, so as to generate well-clustered features for the target data. A memory bank $V = \{v_1, \cdots, v_{N_t}\}$ is utilized, where $v_i$ is the stored feature vector for $x_i^t$, updated with the mini-batch features in each iteration. Then the similarity between a feature $f_i = F(x_i^t)$ and a stored feature $v_j$ with $i \neq j$ is calculated as,

$$p_{i,j} = \frac{\exp(v_j^\top f_i / \tau)}{\sum_{r \neq i} \exp(v_r^\top f_i / \tau)}, \qquad (6)$$

where the temperature $\tau$ determines the level of concentration (Hinton et al., 2015). $p_{i,j}$ actually describes the probability that feature $f_i$ is a neighbor of $v_j$.
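In code, Eq. (6) is a temperature-scaled softmax over the inner products of the $\ell_2$-normalized features with the bank; the `own_idx` argument, our addition, masks each sample's own slot to enforce $i \neq j$.

```python
def neighbor_probabilities(f_t: torch.Tensor, bank: torch.Tensor,
                           own_idx: torch.Tensor, tau: float = 0.05) -> torch.Tensor:
    """Eq. (6): p_{i,j} = softmax_j(v_j . f_i / tau), excluding j = i."""
    sims = f_t @ bank.t() / tau                                  # (B, N_t)
    sims = sims.scatter(1, own_idx.unsqueeze(1), float("-inf"))  # mask self-similarity
    return F.softmax(sims, dim=1)
```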
To enforce each target sample to be close to its nearby neighbors, a self-supervised feature clustering loss is adopted for target samples as,

$$\mathcal{L}_{SFC} = -\frac{1}{N_t}\sum_{i=1}^{N_t}\sum_{j \neq i} p_{i,j} \log p_{i,j}. \qquad (7)$$

Minimizing the above loss actually minimizes the entropy of each target sample's similarity distribution to the other target samples, which helps gather similar target samples together into compact clusters, and separates target samples of different clusters in the feature space.

Algorithm 1 Training procedure of UACP
Input: Source domain $\mathcal{D}_s$; target domain $\mathcal{D}_t$; feature extractor $F$ parameterized by $\theta_F$; randomly initialized composite classifier $CC$ parameterized by $\theta_{CC}$; number of iterations $T$ in an epoch; number of epochs $E$.
Output: Optimal parameters $\theta_F$, $\theta_{CC}$.
1: Initialize memory bank $V$ with target features.
2: for epoch = 1 to $E$ do
3:   for iteration = 1 to $T$ do
4:     Sample a batch $(S)$ from $\mathcal{D}_s$ and $(T)$ from $\mathcal{D}_t$.
5:     Obtain extracted features $f_s = F(S)$ and $f_t = F(T)$.
6:     Update features at the current positions in $V$ with $f_t$.
7:     Compute the similarities between $f_t$ and the features in $V$ using Eq. (6).
8:     Obtain probability outputs of $CC$: $p_{mc}$ and $p_{ova}$.
9:     Compute $\mathcal{L}_{CE}(S)$ and $\mathcal{L}_{SOVA}(S)$ using source data.
10:    Compute $\mathcal{L}_{SFC}(T)$, $\mathcal{L}_{TOVA}(T)$, and $\mathcal{L}_{ESL}(T)$ using target data.
11:    Update $\theta_F$ and $\theta_{CC}$ by minimizing the overall loss in Eq. (9).
12:   end for
13: end for
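A sketch of Eq. (7), together with the bank refresh of steps 6-7 in Algorithm 1, is given below; the bank is assumed to be a pre-allocated `(N_t, feat_dim)` tensor of $\ell_2$-normalized features.

```python
def sfc_loss(p_neighbor: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Eq. (7): entropy of each target sample's neighbor distribution; minimizing
    it pulls samples toward their nearby neighbors, forming compact clusters."""
    return -(p_neighbor * torch.log(p_neighbor + eps)).sum(dim=1).mean()

# Steps 6-7 of Algorithm 1 inside a training iteration (bank update, then Eq. (6)):
# bank[tgt_idx] = f_t.detach()                  # refresh current positions in V
# p_neighbor = neighbor_probabilities(f_t, bank, tgt_idx, tau=0.05)
```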
Further, the low-density separation of the target data has already been enforced on the MC predictor by the entropy-strengthened loss of Eqs. (4)-(5). Besides, it is also conducted on the OVA predictors to seek a domain-invariant classifier across domains. Specifically, we perform entropy minimization (Saito et al., 2019) for target samples over each OVA predictor by,

$$\mathcal{L}_{TOVA} = -\frac{1}{N_t}\sum_{i=1}^{N_t}\sum_{k=1}^{K}\Big[p_{ova_k}^+(x_i^t)\log p_{ova_k}^+(x_i^t) + p_{ova_k}^-(x_i^t)\log p_{ova_k}^-(x_i^t)\Big]. \qquad (8)$$

The above loss is minimized to increase the prediction confidence of the OVA predictors. In this way, the shared classes across domains are implicitly aligned, while the target unknowns are kept away from the known classes.
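Eq. (8) in code is the per-predictor binary entropy accumulated over all $K$ OVA heads; the sum-over-$K$ reduction is our assumption.

```python
def target_ova_entropy(p_ova: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Eq. (8): entropy of every binary OVA output on target samples; minimizing
    it pushes each OVA decision toward high confidence (low-density separation)."""
    ent = -(p_ova * torch.log(p_ova + eps)).sum(dim=2)  # (B, K) entropy per predictor
    return ent.sum(dim=1).mean()
```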

Overall Training Objective for UACP
The final learning objective of UACP can be formulated as,

$$\mathcal{L} = \mathcal{L}_{CE} + \mathcal{L}_{SOVA} + \alpha\,\mathcal{L}_{ESL} + \beta\,\mathcal{L}_{SFC} + \gamma\,\mathcal{L}_{TOVA}, \qquad (9)$$

where $\alpha$, $\beta$, and $\gamma$ control the trade-off among the components in Eq. (9). In each iteration, the memory bank updates a batch of target features, and the network updates the parameters $\theta_F$ and $\theta_{CC}$. The algorithm description of UACP is given in Algorithm 1.
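Putting the pieces together, one training iteration of Algorithm 1 might look as follows, reusing the loss sketches above; the mapping of $\alpha$, $\beta$, $\gamma$ onto $\mathcal{L}_{ESL}$, $\mathcal{L}_{SFC}$, $\mathcal{L}_{TOVA}$ follows our reconstruction of Eq. (9) and is an assumption.

```python
def train_step(F_net, CC, bank, src_x, src_y, tgt_x, tgt_idx, optimizer,
               alpha=0.05, beta=0.1, gamma=0.05, m=0.4, tau=0.05):
    # F_net is the feature extractor (named to avoid clashing with functional F).
    f_s, f_t = F_net(src_x), F_net(tgt_x)                    # step 5: features
    bank[tgt_idx] = f_t.detach()                             # step 6: refresh bank
    p_nb = neighbor_probabilities(f_t, bank, tgt_idx, tau)   # step 7: Eq. (6)
    p_mc_s, p_ova_s = CC(f_s)                                # step 8: outputs
    p_mc_t, p_ova_t = CC(f_t)
    loss = (F.nll_loss(torch.log(p_mc_s + 1e-8), src_y)          # L_CE, Eq. (1)
            + source_ova_loss(p_ova_s, src_y)                    # L_SOVA, Eq. (2)
            + alpha * entropy_strengthened_loss(p_mc_t, p_ova_t, m)  # L_ESL
            + beta * sfc_loss(p_nb)                                  # L_SFC
            + gamma * target_ova_entropy(p_ova_t))                   # L_TOVA
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                         # step 11: update
    return loss.item()
```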

Experiments
We validate the proposed UACP mainly under two adaptation settings: open-set domain adaptation and universal domain adaptation.

Evaluation Metrics
To better evaluate the performance of UACP under both the OSDA and UniDA scenarios, we utilize the HOS metric (Bucci et al., 2020), defined as the harmonic mean of the average per-class accuracies over known and unknown samples, denoted by $Acc_{kn}$ and $Acc_{unk}$, respectively. HOS is formulated as,

$$HOS = \frac{2 \cdot Acc_{kn} \cdot Acc_{unk}}{Acc_{kn} + Acc_{unk}}. \qquad (10)$$

It fairly considers the performance on both known and unknown data. Besides, the instance-wise accuracy on known classes ($Acc$) and the area under the ROC curve ($AUC$) are also adopted in Sect. 4.3.4, following the standard protocol of unknown detection (Hendrycks & Gimpel, 2016).
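The metric is straightforward to compute; a small helper, with a guard for the degenerate all-zero case, might look like:

```python
def hos(acc_kn: float, acc_unk: float) -> float:
    """Eq. (10): harmonic mean of known-class and unknown-class accuracy."""
    if acc_kn + acc_unk == 0.0:
        return 0.0
    return 2.0 * acc_kn * acc_unk / (acc_kn + acc_unk)
```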

Implementation Details
We implement UACP in the PyTorch framework (Paszke et al., 2017). For fair comparisons, our network is based on a ResNet-50 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015). Following Saito et al. (2019), the last linear layer is replaced by a new linear classification layer.
The learning rates of the new linear layer and the fine-tuned layers are set to 0.01 and 0.001, respectively, with inverse scheduling. We use a mini-batch SGD optimizer with momentum 0.9 and weight decay 0.0005 in all experiments. The temperature $\tau$ is set to 0.05 following Ranjan et al. (2017). The trade-off parameters $\alpha$, $\beta$, and $\gamma$ are fixed in UACP, i.e., $\alpha = \gamma = 0.05$ and $\beta = 0.1$. The margin $m$ is set to 0.4 for Office-31 and Office-Home, and to 0.5 for VisDA and DomainNet.
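A sketch of this optimizer setup is shown below; `backbone` and `new_head` are hypothetical module names, and the $(1 + 10p)^{-0.75}$ inverse-decay form, common in the DA literature, is our assumption since the paper does not give the scheduling constants.

```python
optimizer = torch.optim.SGD(
    [{"params": backbone.parameters(), "lr": 0.001},  # fine-tuned layers
     {"params": new_head.parameters(), "lr": 0.01}],  # new linear classification layer
    momentum=0.9, weight_decay=0.0005)

def inverse_schedule(base_lr: float, step: int, total_steps: int) -> float:
    """Assumed inverse decay: lr = base_lr * (1 + 10p)^(-0.75), p = progress."""
    p = step / max(total_steps, 1)
    return base_lr * (1.0 + 10.0 * p) ** (-0.75)
```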

Results
In this section, we evaluate UACP by comparing it with state-of-the-art methods. The bolded value in each column indicates the best performance among all methods.
The results on the Office-31 and VisDA datasets are reported in Table 2, and Table 3 records the performance on the Office-Home dataset. UACP achieves the best performance on 5 of the 6 tasks on Office-31; on average, it outperforms the previous state-of-the-art method OVANet by 3.0%. On the VisDA dataset, UACP significantly outperforms OVANet by 16.3%, and outperforms the second-best method DCC by 1.7%. Note that DCC takes the prior of the OSDA and UniDA settings into consideration, while our UACP requires no prior knowledge of private classes. From Table 3, UACP achieves the best results on 7 of the 12 tasks, and on average it surpasses all OSDA and UniDA methods. Notably, OSDA methods are designed for the specific open-set scenario and rely on such prior knowledge, while UACP is able to adapt to more general scenarios.
Table 4 shows the results on the Office-31 dataset under the UniDA scenario. The proposed UACP outperforms all the baselines on 5 of the 6 tasks, yielding a 4.5% improvement on average over the previous state-of-the-art OVANet. The results on DomainNet and VisDA are recorded in Table 5, and the performance on Office-Home is reported in Table 6. UACP achieves the best performance on two challenging benchmarks, i.e., 60.1% on VisDA and 74.7% on Office-Home, with improvements of 7.0% and 2.9% over the second-best method, respectively. On the DomainNet dataset, UACP outperforms the baselines on 2 of the 6 tasks, and achieves the second-best HOS of 50.3%, which is actually comparable to the best one. From the comparisons with both OSDA and UniDA methods in different settings, it can be observed that our proposed UACP properly tackles different levels of category shift, and thus performs well in different UDA scenarios.

Analysis
In this section, more analyses are provided to further investigate the effectiveness of UACP. Performance with a varying number of unknown classes on two tasks (Ar→Re and Cl→Pr) of Office-Home under both the OSDA and UniDA settings is presented in Figure 3, where UACP is compared with the UniDA methods DANCE, CMU, DCC, and OVANet. In summary, UACP yields consistent improvement on all tasks, demonstrating that it can effectively handle different levels of label shift across domains.

Ablation study
In this sub-section, we verify the effectiveness of the individual components of UACP. Specifically, ablation studies on the difficult task Pr→Re of Office-Home under both the OSDA and UniDA settings are presented in Table 7. Four variants of UACP are studied: (i) "w/o $\mathcal{L}_{ESL} + \mathcal{L}_{SFC} + \mathcal{L}_{TOVA}$" is the variant trained with source supervision only; (ii) "w/o $\mathcal{L}_{ESL}$" discards the classifier paradox of Eqs. (4)-(5) for identifying unknown samples; (iii) "w/o $\mathcal{L}_{SFC}$" discards the self-supervised feature clustering on target samples in Eq. (7); (iv) "w/o $\mathcal{L}_{TOVA}$" discards the entropy minimization on OVA predictors in Eq. (8). From Table 7, each component of UACP contributes to the target performance. Specifically, the unknown authentication part $\mathcal{L}_{ESL}$ is essential for unknown detection: it increases $Acc_{unk}$ significantly from 56.4% to 72.9% under the OSDA scenario, and from 60.4% to 75.2% under the UniDA scenario. Besides, removing $\mathcal{L}_{SFC}$ greatly hurts $Acc_{kn}$. When employing the entropy minimization of target samples on the OVA predictors, the HOS is improved from 70.5% to 74.8% under the OSDA scenario, and from 79.1% to 83.3% under the UniDA scenario.

Convergence comparison
The per-iteration performance of UACP, compared to the state-of-the-art OVANet, is presented in Figure 4. We plot $Acc_{kn}$, $Acc_{unk}$, and HOS w.r.t. the number of iterations on the task A→D under the OSDA setting, and A→W under the UniDA setting, respectively. As illustrated in Figure 4, UACP converges quickly within the first several hundred iterations and achieves better performance. Besides, our $Acc_{kn}$, $Acc_{unk}$, and HOS fluctuate much less than those of OVANet, demonstrating the stability and effectiveness of our proposal.

Hyper-parameter analysis
To illustrate the sensitivity of UACP to the trade-off parameters $\alpha$, $\beta$, and $\gamma$, we perform experiments on the tasks D→A and Ar→Pr under the UniDA scenario.

Conclusion
In this work, we propose UACP, a novel UniDA approach that adaptively identifies unknowns by classifier paradox. In UACP, a composite classifier is proposed to tackle both domain and category shifts: it distinguishes different source categories with the MC predictor, and captures the concept of "unknown" through verification by the OVA predictors. Moreover, self-supervised knowledge is utilized to pursue well-clustered target features and low-density separation of the target data, so as to conduct implicit domain alignment with a domain-invariant classifier.

Fig. 1
Fig. 1 Illustration of adaptive unknown authentication by the prediction paradox between the MC predictor and the corresponding OVA predictor. If the predictions for sample x from the MC and OVA predictors are consistent, the predicted label (PL) is among the known source classes; otherwise, the sample is predicted as "unknown".

Fig. 3
Fig. 3 HOS w.r.t. varying $|\mathcal{L}_s - \mathcal{L}_t|$ on tasks Ar→Re and Cl→Pr for both OSDA and UniDA scenarios

Fig. 4
Fig. 4 Comparison of convergence speed and performance between UACP and OVANet

Fig. 5
Fig. 5 Sensitivity analysis of the trade-off parameters on tasks (a) D→A of Office-31 and (b) Ar→Pr of Office-Home under the UniDA scenario


Table 1
The division of label sets in each setting

Table 6
HOS (%) on Office-Home for UniDA