Reversible Jump Attack to Textual Classifiers with Modification Reduction

Recent studies on adversarial examples expose vulnerabilities of natural language processing (NLP) models. Existing techniques for generating adversarial examples are typically driven by deterministic hierarchical rules that are agnostic to the optimal adversarial examples, a strategy that often results in adversarial samples with a suboptimal balance between the magnitude of changes and attack success. To this end, we propose two algorithms, Reversible Jump Attack (RJA) and Metropolis-Hasting Modification Reduction (MMR), to generate highly effective adversarial examples and to improve their imperceptibility, respectively. RJA utilizes a novel randomization mechanism to enlarge the search space and efficiently adapts the number of perturbed words to each adversarial example. Given these generated adversarial examples, MMR applies the Metropolis-Hasting sampler to enhance their imperceptibility. Extensive experiments demonstrate that RJA-MMR outperforms current state-of-the-art methods in attack performance, imperceptibility, fluency and grammatical correctness.


Introduction
NLP models are known to be vulnerable in various applications, including machine translation (Ni et al, 2022; Cheng et al, 2020; Tan et al, 2020), sentiment analysis (Zang et al, 2020; Yang et al, 2021), and text summarization (Cheng et al, 2020). Attackers can exploit these weaknesses, creating adversarial examples that compromise the performance of targeted NLP systems. This growing susceptibility presents significant security challenges for AI models.
Textual attacks on NLP models are classified into character-level (Iyyer et al, 2018a; Ribeiro et al, 2018), word-level (Alzantot et al, 2018; Jia et al, 2019), and sentence-level (Jia and Liang, 2017) attacks. Character-level attacks are easily countered due to noticeable misspellings (Ebrahimi et al, 2018), while sentence-level attacks often yield complex, hard-to-read text (Gan and Ng, 2019). Word-level attacks are gaining preference for their effectiveness and subtlety, as they involve replacing words with carefully chosen substitutes (Zhang et al, 2020; Garg and Ramakrishnan, 2020; Li et al, 2020). Consequently, our focus is on conducting word-level adversarial attacks.
Crafting optimal adversarial examples involves navigating the interplay between successful attacks and controlled imperceptibility. The predominant strategies can be classified into optimization algorithms and hierarchical search methods. Within the realm of optimization, Genetic Attack (GA) (Alzantot et al, 2018; Jia et al, 2019) and Particle Swarm Optimization (PSO) (Zang et al, 2020) stand out as evolutionary approaches, focusing on optimizing attack effectiveness within embedding spaces and sememe-based thesauri, respectively. However, these methods face two primary challenges: 1) low efficiency in the optimization process due to the expansive search space, such as GloVe (Pennington et al, 2014), and 2) compromised semantic integrity, as even synonym-based word substitutions can cause sentence-level semantic inconsistency. On the other hand, hierarchical search crafts adversarial examples by substituting words in the order given by a word saliency rank (WSR) (Ren et al, 2019; Li et al, 2021; Yang et al, 2021). It first identifies target words using the WSR, then employs a masked language model or thesaurus for substitutions. These hierarchical attacking methods have several drawbacks: 1) it is difficult to preset the number of perturbed words (NPW) for large datasets with many tokens, since the optimal NPW varies with different target texts (Michel et al, 2019); 2) WSR-based methods significantly reduce the search domain by only attacking combinations of victim words ordered by the WSR. For a clear illustration, Fig 1 showcases the drawbacks of the optimization-based GA and the hierarchical PWWS attack. GA's replacement of 'thriller' with 'science' sacrifices semantic quality, while PWWS, despite altering three words, fails to fool the classifier.
To address the above problems, we propose two novel black-box, word-level antagonistic algorithms: the Reversible Jump Attack (RJA) and MH Modification Reduction (MMR). For RJA, we employ the Reversible Jump sampler (RJS) and propose three variables from a target distribution: the number of perturbed words (NPW), the victim words, and their substitutions from masked language models (MLM) and HowNet (Dong and Dong, 2003). The target distribution of RJS, which evaluates the quality of the adversarial candidates, is regularized by a strong penalty on semantic (dis)similarity. The NPW can be searched cross-dimensionally via RJS to adjust to different textual inputs according to their word saliency and overall performance. Given these three factors, adversarial candidates are accepted only according to an acceptance probability from RJS. By running this process iteratively, we obtain the successful candidates with the highest semantic similarity. Therefore, RJA efficiently searches for threat-level attacks inside a domain larger than the WSR, without presetting an NPW or sacrificing much semantics for imperceptibility.

Fig. 1 An illustrative example showing the attack performance of an optimizing attack (genetic attack), the PWWS attack, and the proposed method RJA-MMR, where label "0" represents negative sentiment and "1" represents positive sentiment. The substitutions of the different attack methods are in bold. The genetic attack sacrifices too much semantics by changing "thrillers" to "science", while PWWS fails to fool the model and makes many ineffective modifications. The proposed method, RJA-MMR, makes a successful attack with only one word changed.
The other algorithm is Metropolis-Hasting Modification Reduction (MMR), which tends to restore the manipulations from RJA (i.e., reverse attacked words back to the original words) and then update the remaining substitutions to maintain the attacking performance. Specifically, given an adversarial candidate, MMR first stochastically proposes a new candidate by restoring some attacked words. It applies a customized acceptance probability, calculated by comparing the overall performance of the new and current candidates, to determine whether the new candidate is accepted. After restoring some attacked words, MMR uses the MH algorithm to update the substitutions of the remaining attacked words to preserve the attacking performance. By combining RJA and MMR, we propose the integrated RJA-MMR as our final model. Specifically, RJA utilizes a Reversible Jump sampler (Green, 1995a), a member of the Markov Chain Monte Carlo (MCMC) family, to sample dimension-jumping vectors and perform a cross-dimensional search for the optimal attacking performance constrained by semantic similarity. Intuitively, RJA and MMR agree on improving attacking performance but disagree on the NPW. By iteratively running these two antagonistic algorithms, attackers can boost the attack performance with only a small number of perturbations. The attack performance is illustrated by the example in Fig 1, where RJA-MMR outperforms the optimizing attack (genetic attack) and the hierarchical attack (PWWS).
Our main contributions from this work are as follows: 1) we propose the Reversible Jump Attack (RJA), which enlarges the search space with a randomization mechanism and adaptively determines the number of perturbed words; 2) we propose Metropolis-Hasting Modification Reduction (MMR), which improves the imperceptibility of the generated adversarial examples; and 3) extensive experiments demonstrate that RJA-MMR outperforms current state-of-the-art methods in attack performance, imperceptibility, fluency and grammatical correctness.

The rest of this paper is structured as follows. We first review adversarial attacks on NLP models and Markov Chain Monte Carlo methods in NLP in Section 2. Then we detail our proposed method in Section 3. We evaluate the performance of the proposed method through empirical analysis in Section 4. We conclude the paper with suggestions for future work in Section 5.

Related Work
This section reviews the literature on word-level textual attacks and MCMC sampling in NLP.

Word-level Attacks to Classifiers
An increasing amount of effort is devoted to generating better textual adversarial examples with various attack models. Character-level attacks (Liang et al, 2018; Ebrahimi et al, 2018) use misspellings to attack the victim classifiers; however, these attacks can often be defended by a spell checker. At the same time, sentence-level attacks (Iyyer et al, 2018b; Zou et al, 2020) pose threats to the classifier by inserting, removing, or paraphrasing sentences or pieces of sentences in the original input, while it is difficult for the generated text to maintain imperceptibility (Li et al, 2021). Word-level attacks pose nontrivial threats to NLP models by locating important words and manipulating them for targeted or untargeted purposes. Such attacks are broadly regarded as the optimal unit of attack (Jia and Liang, 2017).

Gradient-based Word-level Attacks
With the help of an adapted fast gradient sign method (FGSM) (Goodfellow et al, 2015), Papernot et al (2016) were the first to generate word-level adversarial examples against classifiers. While their attack was able to fool the classifiers, their word-level manipulations significantly affected the original meaning. In Liang et al (2018), the authors proposed to attack the target model by inserting Hot Training Phrases (HTPs) and modifying or removing Hot Sample Phrases (HSPs), where HTPs and HSPs are calculated based on the gradient with respect to the words of the input. Similarly to Liang, Samanta and Mehta (2018) utilize the embedding gradient to determine the important words; hierarchically-driven rules together with hand-crafted word-level synonyms and character-level typos were then designed. Notably, since textual data is naturally discrete and more perceptible than image data, many gradient-based textual attacking methods inherited from computer vision are not effective enough, which leaves textual attack a challenging problem.

Non-gradient-based Word-level Attacks
Alzantot et al (2018) transferred the domain of adversarial attacks to an optimization problem by formulating a customized objective function. With genetic optimization, they generate adversarial examples by sampling the qualified genetic 'son' generations that break out of the encirclement of the semantic threshold. However, the genetic algorithm can be inefficient: since the word embedding space is sparse, performing natural selection for languages in such a space can be computationally expensive. Jia et al (2019) proposed a faster version of Alzantot's adversarial attacks by shrinking the search space, which accelerates the evolving process in genetic optimization. Although this greatly reduces the computational expense of genetic-based optimization algorithms, optimizing inside word embedding spaces, such as GloVe (Pennington et al, 2014) and Word2Vec (Mikolov et al, 2013), is still not efficient enough. To ease the searching process, embedding-based algorithms have to use a counter-fitting method to post-process the attacker's vectors to accelerate the search (Mrkšić et al, 2016). Compared with word embedding methods, utilizing a well-organized linguistic thesaurus, e.g., the synonym-based WordNet (Miller et al, 1990) or the sememe-based HowNet (Dong and Dong, 2003), is simple and easy to implement. Ren et al (2019) sought synonyms based on WordNet synsets and ranked the word replacement order via probability-weighted word saliency (PWWS). Zang et al (2020) and Yang et al (2021) both showed that the sememe-based HowNet can provide more substitute words, using Particle Swarm Optimization (PSO) and an adaptive monotonic heuristic search, respectively, to determine which group of words should be attacked. In addition, some recent studies utilized masked language models (MLM), such as BERT (Devlin et al, 2019) and RoBERTa (Liu et al, 2019), to generate contextual perturbations (Li et al, 2020; Garg and Ramakrishnan, 2020). The pre-trained MLMs can ensure the predicted token fits the sentence grammar, but cannot preserve semantics.

Markov Chain Monte Carlo in NLP
Markov chain Monte Carlo (MCMC) (Metropolis et al, 1953a), a generic statistical method for approximate sampling from an arbitrary distribution, can be applied in a variety of fields, such as optimization (Rubinstein, 1999), machine learning (Fan et al, 2018), quantum simulation (Haase et al, 2021) and Ising models (Herrmann, 1986). The main idea is to generate a Markov chain whose equilibrium distribution is equal to the target distribution (Kroese et al, 2011). There exist various algorithms for constructing chains, including the Gibbs sampler, the Reversible Jump sampler (Green, 1995b), and the Metropolis-Hasting (MH) algorithm (Metropolis et al, 1953a). To build models capable of reading, deciphering, and making sense of human languages, NLP researchers apply MCMC to many downstream tasks, such as text generation and sentiment analysis. For text generation, Kumagai et al (2016) propose a probabilistic text generation model which generates human-like text from semantic syntax and some situational content. Since human-like text requires grammatically correct word alignment, they employed Monte Carlo Tree Search to optimize the structure of the generated text. In addition, Harrison et al (2017) present an application of MCMC to story generation, in which summaries of movies are produced by applying recurrent neural networks (RNNs) to summarize events and directing the MCMC search toward creating stories that satisfy genre expectations. For sentiment analysis, Kang and Ren (2011) apply the Gibbs sampler to a Bayesian network, a network of connected hidden neurons under prior beliefs, to extract latent emotions. Specifically, they apply Hidden Markov models to a hierarchical Bayesian network and embed the emotional variables as the latent variables of the Hidden Markov model.

Metropolis-Hasting and Reversible Jump Samplers
The Metropolis-Hasting (MH) (Metropolis et al, 1953a) algorithm is a classical Markov chain Monte Carlo sampling approach. Given the stationary distribution f(z) and transition proposal q(z′|z), the MH algorithm can generate desirable examples from f(z). Specifically, at each iteration, a new state z′ is proposed given the current state z based on the transition function q(z′|z). The MH algorithm follows a "trial-and-error" strategy, defining an acceptance probability α(z′|z) as follows:

α(z′|z) = min{1, [f(z′) q(z|z′)] / [f(z) q(z′|z)]},    (1)

to decide whether the new state z′ is accepted or rejected. MCMC can also be applied to sampling with variable dimensions. The Reversible Jump sampler (RJS) (Green, 1995a) is a variation of MCMC algorithms specifically designed to sample from target distributions over vectors with different dimensions. Due to this property, RJS can be applied to variable selection (Fan and Sisson, 2011), dimension reduction (Rincent et al, 2017), and cross-dimensional optimization (Kroese et al, 2011). Unlike the MH algorithm, RJS requires an additional transition term for proposing the new dimensions. The acceptance probability of RJS is:

α(z′^(m′)|z^(m)) = min{1, [f(z′^(m′)) q(z^(m)|z′^(m′)) p(m|z′^(m′))] / [f(z^(m)) q(z′^(m′)|z^(m)) p(m′|z^(m))]},    (2)

where m denotes the dimension of the vector z^(m), q(z′^(m′)|z^(m)) is the new transition function, and p(m′|z^(m)) is the dimensional transition term. Comparing the acceptance probabilities of MH (Eq. 1) and RJS (Eq. 2) reveals that RJS is better suited than MH to handling dimensional variations and sampling parameters of unknown dimension. Since crafting adversarial examples is a typical situation of dimensional variation, due to the varying number of perturbed words (NPW), we expect attacks based on RJS to achieve better performance than MH-based approaches in the literature (Zhang et al, 2019).
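As a concrete illustration (ours, not from the paper), the MH acceptance rule of Eq. 1 can be sketched with a minimal sampler; the function names are our own:

```python
import math
import random

def metropolis_hastings(f, propose, q, z0, n_steps, seed=0):
    """Minimal Metropolis-Hastings sampler.

    f       : unnormalized target density f(z)
    propose : draws a candidate z' given the current state z
    q       : proposal density q(z_new | z_old)
    """
    rng = random.Random(seed)
    z = z0
    samples = []
    for _ in range(n_steps):
        z_new = propose(z, rng)
        # Acceptance probability: alpha = min(1, f(z')q(z|z') / (f(z)q(z'|z)))
        alpha = min(1.0, (f(z_new) * q(z, z_new)) / (f(z) * q(z_new, z)))
        if rng.random() < alpha:
            z = z_new  # accept; otherwise stay at z ("trial-and-error")
        samples.append(z)
    return samples

# Sampling from an unnormalized standard normal with a symmetric proposal:
target = lambda z: math.exp(-0.5 * z * z)
prop = lambda z, rng: z + rng.uniform(-1.0, 1.0)
density = lambda a, b: 1.0  # symmetric proposal, so q cancels in the ratio
chain = metropolis_hastings(target, prop, density, z0=0.0, n_steps=5000)
```

With a symmetric proposal the q-ratio equals one and the rule reduces to the familiar min(1, f(z′)/f(z)); RJS extends the same ratio with the dimensional transition term of Eq. 2.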

Adversarial Attack via MCMC
Beyond these NLP applications, MCMC can also be applied to adversarial attacks on NLP models. Zhang et al (2019) successfully applied MH sampling to generate fluent adversarial examples for natural language (MHA) by proposing gradient-guided word candidates. Specifically, they proposed both black-box and white-box attacks. For black-box attacks, they perform removal, insertion and replacement with words chosen from a pre-selected candidate set, but empirical studies indicate these candidates are not efficient and effective for attacking. For white-box attacks, the gradient of the victim model is introduced to score the pre-selected candidate set, which successfully improves the attacking performance. However, the white-box setting is not practical in the real world, as attackers do not have access to the gradient and structure of the victim models. In addition, MHA successfully improves language quality in terms of fluency, but the imperceptibility of the generated examples, especially the modification rate, is not optimized.

Imperceptible Adversarial Attack via Markov Chain Monte Carlo
In this section, we will detail our proposed method, RJA-MMR, the Reversible Jump attacks (RJA) with Metropolis-Hasting Modification Reduction (MMR).

Problem Formulation and Notation
Given a pre-trained text classification model F(·), which maps from feature space X to a set of classes Y, an adversary aims to generate an adversarial document x*

Table 1 Notation.

D : The dataset to be attacked.
x = [w_1, w_2, ..., w_n] : An input text with n words, where w_i is the i-th word in the sequence.
x̂ : An adversarial candidate generated by RJA.
m, v, s : The three factors in adversarial sample generation: the number of perturbed words, the victim words, and their substitutions, respectively.
G : The set of substitution candidates.
x̂_r : The adversarial candidate generated in the restoring step of MMR.
x̂_u : The adversarial candidate generated in the updating step of MMR.
x* : The final optimal adversarial example.
I(w_i) : The saliency of the word w_i.
T : The total number of iterations of RJA-MMR.
Sem(·, ·) : The function measuring semantic similarity.
from a legitimate document x ∈ X whose ground-truth label is y ∈ Y, such that F(x*) ≠ y. The adversary also requires Sem(x, x*) ≥ ϵ for a domain-specific semantic similarity function Sem(·, ·) : X × X → (0, 1), where the bound ϵ ∈ R helps to ensure imperceptibility. In other words, in the context of text classification tasks, we use Sem(x, x*) to capture the semantic similarity between x and x*. More details of the notation are given in Table 1.
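The attack goal can be expressed as a simple predicate. The following sketch uses toy stand-ins of our own for F(·) and Sem(·, ·), and states the constraint as a lower bound on similarity:

```python
def is_successful_attack(classify, sem_sim, x, x_adv, y_true, eps):
    """Untargeted attack: the predicted label must flip while the
    semantic similarity Sem(x, x*) stays above the bound eps."""
    return classify(x_adv) != y_true and sem_sim(x, x_adv) >= eps

# Toy stand-ins for F(.) and Sem(., .): a keyword classifier and a
# Jaccard word-overlap similarity (illustrative only).
classify = lambda text: 1 if "good" in text else 0
sem_sim = lambda a, b: len(set(a.split()) & set(b.split())) / len(set(a.split()) | set(b.split()))

ok = is_successful_attack(classify, sem_sim,
                          "a good movie", "a fine movie", y_true=1, eps=0.4)
```

Here the single-word substitution flips the toy classifier's label while keeping the overlap similarity above the bound, so the predicate holds.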

Reversible Jump Attack
This section details our proposed Reversible Jump Attack (RJA), which generates adversarial examples under semantic regularization. Let D = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)} denote a dataset with N data samples, where x and y are the input text and its corresponding class. Given the input text

Fig. 2 The workflow of our RJA-MMR. In this example, RJA-MMR generates an adversarial example with one word perturbed to attack a sentiment classifier with two labels (positive and negative). Block ① shows the calculation of word saliency. After obtaining the word saliency, we perform RJA in block ②, which reflects lines 4-15 in Algorithm 1. After RJA, we perform the two MMR steps, restoring and updating, in blocks ③ and ④, respectively. Blocks ③ and ④ are illustrated in lines 4-10 and lines 11-18 of Algorithm 2, respectively.
x = [w_1, ..., w_i, ..., w_n] with n words, we denote an adversarial candidate of RJA as x̂ and the final chosen adversarial example as x*. RJA, unlike traditional methods, treats the number of perturbed words (NPW) as a variable in the sampling process, not a preset value. Utilizing the Reversible Jump sampler, RJA conditionally samples the NPW, the victim words, and their substitutions. The approach involves a transition function that proposes adversarial candidates, evaluated against a target distribution focusing on attack effectiveness and semantic similarity (Eq. 2). This process iteratively refines the adversarial examples, guided by an acceptance probability mechanism. This section first presents the transition function (Section 3.2.1) and then elaborates on the acceptance probability (Section 3.2.2), which builds upon the transition function.

Transition Function
To propose adversarial candidates, we construct our transition function to sequentially propose the three compulsory factors for crafting a new adversarial candidate x̂_{t+1} given the current one x̂_t: the NPW m, the victim words v = [v_1, ..., v_m], and the corresponding substitutions s = [s_1, ..., s_m], where the dimension of v and s is m. Before we detail the process of proposing these factors, we first introduce the concept of word saliency. In this context, word saliency refers to the impact of the word w_i on the output of the classifier and the transition function if this word is deleted from the sentence. A word with high saliency has a high impact on the classifier. Thus, assigning more importance to high-saliency words can help the transition function efficiently propose a high-quality adversarial candidate. To calculate the word saliency, we use the change in the victim classifier's logits before and after deleting word w_i to represent the saliency I(w_i):

I(w_i) = F_logit(x) − F_logit(x\w_i),    (4)

where F_logit(·) is the classifier returning the logit of the correct class, and x\w_i is the text with w_i removed. We calculate the word saliency I(w_i) for all w_i ∈ x to obtain the word saliency I(x). The calculation of word saliency is illustrated in block ① of Fig 2.
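The deletion-based saliency can be sketched as follows; the toy logit function here is our own stand-in for F_logit:

```python
def word_saliency(f_logit, words):
    """I(w_i): drop in the correct-class logit when w_i is deleted
    from the token sequence."""
    base = f_logit(words)
    return [base - f_logit(words[:i] + words[i + 1:]) for i in range(len(words))]

# Toy logit function: the correct class is supported only by the word "great".
f_logit = lambda ws: 2.0 if "great" in ws else 0.0
saliency = word_saliency(f_logit, ["a", "great", "movie"])  # [0.0, 2.0, 0.0]
```

Only deleting "great" changes the toy classifier's logit, so that position receives all the saliency, matching the intuition that high-saliency words matter most to the prediction.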
During the iterations of searching for victim words, assume the RJA adversarial candidate at iteration t is x̂_t = (m_t, v_t, s_t) and the new adversarial candidate to be crafted is x̂_{t+1} = (m_{t+1}, v_{t+1}, s_{t+1}). We propose the first factor, the NPW value m_{t+1}, by either adding or subtracting 1, i.e., m_{t+1} ∈ {m_t + 1, m_t − 1}. This set does not need to include m_t because, if the proposed state is rejected, m_{t+1} is retained as m_t, which means m_t still remains a possible state. The transition function for the new NPW value m_{t+1} can thus be formulated as a probability mass function:

p(m_{t+1}|x̂_t) = exp(l_1) / (exp(l_1) + exp(l_2)) if m_{t+1} = m_t − 1, and exp(l_2) / (exp(l_1) + exp(l_2)) if m_{t+1} = m_t + 1,    (5)

where l_1 denotes the total saliency of the currently attacked (victim) words and l_2 the total saliency of the unattacked words. Such a transition function proposes the new state m_{t+1} ∈ {m_t − 1, m_t + 1} according to the proportions of the exponentials of the victim word saliency l_1 and the unattacked word saliency l_2 over the overall saliency exponential. Intuitively, if the saliency values of the attacked words are high, the probability of proposing to reduce one attacked word, m_{t+1} = m_t − 1, is high, and vice versa. Concretely, to sample m_{t+1} from this transition function, we first draw a random number η ∼ Unif(0, 1); if η is less than the probability of sampling m_{t+1} = m_t − 1, i.e., η < exp(l_1)/(exp(l_1) + exp(l_2)), then m_{t+1} = m_t − 1; otherwise m_{t+1} = m_t + 1. Unlike hierarchical attacks, which deterministically perturb words in descending order of word saliency, randomization is applied because of two merits: 1) it overcomes the imprecision problem with the WSR (word saliency rank) mentioned in the introduction, and 2) it enlarges the search domain by proposing more combinations of attacked words than hierarchical searching does.
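The NPW proposal step above can be sketched as follows (function and variable names are ours; l_1 and l_2 follow the definitions in the text):

```python
import math
import random

def propose_npw(m_t, saliency, victim_idx, rng):
    """Propose m_{t+1} in {m_t - 1, m_t + 1}.

    l1: total saliency of currently attacked (victim) words.
    l2: total saliency of unattacked words.
    P(m_{t+1} = m_t - 1) = exp(l1) / (exp(l1) + exp(l2)).
    """
    l1 = sum(saliency[i] for i in victim_idx)
    l2 = sum(s for i, s in enumerate(saliency) if i not in victim_idx)
    p_minus = math.exp(l1) / (math.exp(l1) + math.exp(l2))
    eta = rng.random()  # eta ~ Unif(0, 1)
    return m_t - 1 if eta < p_minus else m_t + 1

# When the attacked word carries almost all of the saliency, shrinking the
# attack (m_t - 1) is proposed with probability close to one:
rng = random.Random(0)
moves = [propose_npw(2, [8.0, 0.1, 0.2, 0.1], {0}, rng) for _ in range(200)]
```

Swapping the saliency mass between attacked and unattacked positions reverses the bias, so the chain can both grow and shrink the perturbation set.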
After determining the number of perturbed words, we sample one target victim word v_tgt (where "tgt" refers to "target") to be manipulated according to the newly sampled m_{t+1}. Specifically, for m_{t+1} = m_t + 1, the target word v_tgt is uniformly sampled from the unattacked word set x\v_t, while for m_{t+1} = m_t − 1, the target word v_tgt is uniformly drawn from the attacked word set v_t and the selected word is restored to the original word. The transition function for sampling the target victim word v_tgt is thus formulated as:

p(v_tgt|m_{t+1}, x̂_t) = 1/(n − m_t) if m_{t+1} = m_t + 1, and 1/m_t if m_{t+1} = m_t − 1.    (6)

After the target word v_tgt ∈ x̂_t is selected, we search for a parsing-fluent and semantic-preserving substitution for v_tgt. We uniformly draw a substitution s_tgt for v_tgt from the candidate set, which is the intersection (consensus) of candidates provided by masked language models (MLMs) and synonyms. Specifically, let M denote the MLM; we mask v_tgt in x to construct a masked text x_mask and feed it into M to search for parsing-fluent candidates. Instead of using the argmax prediction, we take the K most probable words, i.e., the top K words suggested by the logits of M, to construct the MLM candidate set G_M = {w_M^1, ..., w_M^K}. To stay semantically similar, we form a synonym set G_syn = {w_syn^1, ..., w_syn^K} from HowNet (Dong et al, 2010)-based thesauri such as OpenHowNet (Qi et al, 2019) and BabelNet (Qi et al, 2020). These thesauri are context-aware and at the same time provide more synonyms than common thesauri such as WordNet (Miller, 1992). Since our objective is that the generated adversarial examples should be parsing-fluent and semantic-preserving, the substitution s_tgt is uniformly sampled from the intersection G = G_M ∩ G_syn, as illustrated in Eq. 7:
p(s_tgt|v_tgt, m_{t+1}, x̂_t) = 1/|G|, for s_tgt ∈ G,    (7)

where G = G_M ∩ G_syn and |G| is the cardinality of the set G.
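The consensus candidate set G = G_M ∩ G_syn and the uniform draw of Eq. 7 can be sketched as follows (the candidate lists are illustrative; in the paper they come from an MLM and a HowNet-based thesaurus):

```python
import random

def consensus_candidates(mlm_topk, synonyms):
    """G = G_M ∩ G_syn: keep words that are both fluent (MLM top-K)
    and semantic-preserving (thesaurus synonyms)."""
    return sorted(set(mlm_topk) & set(synonyms))

def sample_substitution(G, rng):
    """Uniform draw from G, i.e. p(s_tgt) = 1/|G| as in Eq. 7."""
    return rng.choice(G) if G else None

G_M = ["good", "great", "nice", "fun"]   # top-K MLM suggestions (toy)
G_syn = ["great", "fine", "nice"]        # thesaurus synonyms (toy)
G = consensus_candidates(G_M, G_syn)     # ['great', 'nice']
s_tgt = sample_substitution(G, random.Random(0))
```

Intersecting the two sources is what enforces both constraints at once: a word that is fluent but not a synonym (e.g. "fun") or a synonym the MLM never suggests (e.g. "fine") is excluded.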
By applying the Bayes rule to Eqs. 5, 6 and 7, the final transition function is:

q(x̂_{t+1}|x̂_t) = p(m_{t+1}|x̂_t) · p(v_tgt|m_{t+1}, x̂_t) · p(s_tgt|v_tgt, m_{t+1}, x̂_t).    (8)

Acceptance Probability for RJA
Before calculating the acceptance probability, we need to construct the target distribution for evaluating performance. Specifically, we argue that a good adversarial example should achieve a successful attack while remaining semantically similar to the input text x. Therefore, we formulate the following target distribution:

π(x̂) = (1 − F_p(x̂)) · Sem(x, x̂) / Z,    (9)

where Sem(x, x̂) represents the semantic similarity, generally implemented as the cosine similarity between sentence encodings from a pre-trained sentence encoder such as USE (Cer et al, 2018), Z is a positive normalizing factor that makes Σ_{x̂∈X} π(x̂) = 1, and F_p(·) : X → (0, 1) denotes the confidence of making the right prediction, where X represents the text space. From Eq. 9, we can observe that the value of the target distribution π(x̂) increases with the attacking performance, measured by the confidence of making a wrong prediction, 1 − F_p(x̂), and with the semantic similarity Sem(x, x̂).
Given the target distribution in Eq. 9 and the transition function in Eq. 8, we formulate the acceptance probability for RJA, α_RJA(x̂_{t+1}|x̂_t), as:

α_RJA(x̂_{t+1}|x̂_t) = min{1, [π(x̂_{t+1}) q(x̂_t|x̂_{t+1})] / [π(x̂_t) q(x̂_{t+1}|x̂_t)]}.    (10)

After calculating α_RJA(x̂_{t+1}|x̂_t), we sample a random number ϵ from a uniform distribution, ϵ ∼ Uniform(0, 1); if ϵ < α_RJA(x̂_{t+1}|x̂_t), we accept x̂_{t+1} as the new state, otherwise the state remains x̂_t. By running T iterations, we obtain a set of adversarial candidates {x̂_1, x̂_2, ..., x̂_T}. We then choose the candidate that not only successfully fools the classifier but also preserves the most semantics as the final adversarial candidate x̂. The process of RJA is illustrated in Algorithm 1 and block ② of Fig 2.
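The accept/reject step can be sketched as below. The product form of the target density is one simple choice consistent with the text (higher for stronger attacks and higher similarity); the paper's exact regularization may differ, and all names here are ours:

```python
import random

def target_density(conf_correct, sem_sim):
    """Unnormalized pi(x'): grows with attack strength (1 - F_p)
    and with semantic similarity Sem(x, x').
    A simple product form assumed for illustration."""
    return (1.0 - conf_correct) * sem_sim

def accept_candidate(pi_new, pi_cur, trans_ratio, rng):
    """RJA-style acceptance: min(1, density ratio * transition ratio),
    then compare against a Unif(0, 1) draw."""
    if pi_cur <= 0.0:
        return True
    alpha = min(1.0, (pi_new / pi_cur) * trans_ratio)
    return rng.random() < alpha

rng = random.Random(0)
pi_cur = target_density(conf_correct=0.9, sem_sim=0.95)  # weak attack, similar
pi_new = target_density(conf_correct=0.2, sem_sim=0.90)  # strong attack
accepted = accept_candidate(pi_new, pi_cur, trans_ratio=1.0, rng=rng)
```

A candidate that attacks much more effectively at a small similarity cost has a density ratio above one, so it is accepted deterministically; worse candidates are still accepted occasionally, which is what lets the chain escape local optima.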

Modification Reduction with Metropolis-Hasting Algorithm
Besides successfully tampering with the classifier and preserving semantics, the modification rate is also an important factor in evaluating the imperceptibility of adversarial examples. Generally, methods in the literature can generate effective adversarial examples; however, it is hard to guarantee that the modification rate is the lowest possible. To address this, we explore effective yet minimal substitution combinations for a given adversarial candidate. MMR involves two steps, each employing the MH algorithm: 1) stochastically restoring some attacked words to create a less modified candidate, and 2) updating all substitutions without altering the NPW, m. These steps are detailed in Sections 3.3.1 and 3.3.2, respectively.

Restoring Attacked Words with MMR
The first step of MMR probabilistically restores some attacked words with the MH algorithm to test the necessity of the current substitutions. Given an adversarial candidate x̂_t = (m_t, v_t, s_t) from iteration t of RJA, we aim to generate an adversarial candidate x̂_t^r constructed by restoring some attacked words in x̂_t. To sample the restored substitutions, we propose a probability mass function over s_r ∈ {s_i, w_i} at iteration t (Eq. 12), where s_r = s_i denotes continuing the attack and s_r = w_i denotes restoring the substitution to the original word w_i. The candidate x̂_t^r is the proposed adversarial candidate with the selected substitutions restored from x̂_t. With such a probability mass function, s_r can be sampled by the same strategy as in Eq. 5. To further assess the quality of such a candidate, we apply the target distribution π(·) of Eq. 9 to construct the following acceptance probability:

α_MMR-r(x̂_t^r|x̂_t) = min{1, [π(x̂_t^r) p(x̂_t|x̂_t^r)] / [π(x̂_t) p(x̂_t^r|x̂_t)]},    (13)

to decide whether the proposed adversarial candidate x̂_t^r should be accepted as the current candidate.
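A single restoring move can be sketched as follows. This is a simplified sketch of our own: it proposes reverting one attacked position and accepts by the ratio of target densities, whereas the paper's Eq. 12-13 also weigh the proposal probabilities:

```python
import random

def mmr_restore_step(x_adv, originals, victims, pi, rng):
    """Propose restoring one attacked position to its original word,
    then MH-accept by the target-density ratio (simplified sketch)."""
    i = rng.choice(sorted(victims))
    proposal = list(x_adv)
    proposal[i] = originals[i]        # s_r = w_i: undo this substitution
    alpha = min(1.0, pi(proposal) / pi(x_adv)) if pi(x_adv) > 0 else 1.0
    if rng.random() < alpha:
        return proposal, victims - {i}   # restoration accepted
    return x_adv, victims                # keep the attack (s_r = s_i)

# If restoring does not reduce the target density, the move is accepted:
pi = lambda tokens: 1.0  # toy density, constant for illustration
x_adv, victims = mmr_restore_step(["a", "FINE", "movie"],
                                  ["a", "good", "movie"], {1}, pi,
                                  random.Random(0))
```

Iterating this step prunes substitutions that are unnecessary for the attack, which is exactly how MMR lowers the modification rate without giving up attack success.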

Updating the Combination of Substitutions with MMR
Having restored selected substitutions to obtain the adversarial candidate x̂_t^r at the t-th iteration, we proceed to the second step: MMR updating. This step is designed to refine attack performance by altering substitution combinations without affecting the NPW, m_t. We apply a methodology similar to that of Eq. 7 for sampling substitution combinations. In essence, MMR updating utilizes the candidate proposing function (Eq. 7) to explore alternative substitutions for each attacked word, aiming for enhanced attack efficacy. The update leading to the next adversarial candidate x̂_t^u is governed by the following acceptance probability:

α_MMR-u(x̂_t^u|x̂_t^r) = min{1, [π(x̂_t^u) p(s_i|w_i, m_t^r, x̂_t^u)] / [π(x̂_t^r) p(s_i|w_i, m_t^r, x̂_t^r)]},    (14)

where p(s_i|w_i, m_t^r, x̂_t^r) is identical to that in Eq. 7. By iteratively running the MH algorithms for substitution restoring and updating T times, with the acceptance probabilities in Eq. 13 and Eq. 14, respectively, we construct the adversarial set X′ = {x̂_t^u}_{t=1}^T and select, among the successful candidates that fool the classifier, the one with the highest semantic similarity as the final adversarial example x*. The proposed MMR algorithm is not only applied to our RJA algorithm, but can also help other attack methods reduce their modifications. The whole process of MMR is illustrated in Algorithm 2 and blocks ③ and ④ of Fig 2.

Experiments and Analysis
In this section, we comprehensively evaluate the performance of our method against the current state of the art. Besides the main results (Sec. 4.4) on attacking performance and imperceptibility, we also conduct experiments on ablation studies (Sec. 4.5), efficiency analysis (Sec. 4.6), transferability (Sec. 4.7), targeted attacks (Sec. 4.8), performance in front of defense mechanisms (Sec. 4.9), adversarial retraining (Sec. 4.10), part-of-speech (POS) preference (Sec. 4.11) and model scales for robustness (Sec. 4.12). We evaluate the effectiveness of our methods on four widely-used and publicly available benchmark datasets: AG's News (Zhang et al, 2015), Emotion, SST2 and IMDB. To ensure reproducibility, we provide the code and data used in our experiments in a GitHub repository.

Victim Models
We apply our attack algorithm to two types of popular and well-performed victim models.The details of the models can be found below.

BERT-based Classifiers
For convincing experiments, we choose well-performing and popular BERT-based models, which we call BERT-C models (where "C" stands for "classifier"), pre-trained by Huggingface. Due to the different sizes of the datasets, the structures of the BERT-based classifiers are adjusted accordingly. The BERT classifier for AG's News is structured as Distil-RoBERTa-base (Sanh et al, 2019) connected with two fully connected layers, and it is trained for 10 epochs with a learning rate of 0.0001. For the Emotion dataset, the BERT-C adopts another version of BERT, Distil-BERT-base-uncased (Sanh et al, 2019), and the training hyper-parameters remain the same as for the AG's News BERT-C. Since the SST2 dataset is relatively small compared with the other datasets, its BERT classifier utilizes a small-sized version of BERT, BERT-base-uncased (Devlin et al, 2019). As for IMDB, we employ Distil-BERT-base-uncased for the classification task. The test accuracy of these BERT-based classifiers before being attacked is listed in Table 2, and these models are publicly accessible.

TextCNN-based models
The other type of victim model is TextCNN (Kim, 2014), structured with a 100-dimension embedding layer followed by a 128-unit long short-term memory layer. This classifier is trained for 10 epochs with the Adam optimizer, using a learning rate lr = 0.005, coefficients β1 = 0.9 and β2 = 0.999 for computing the running averages of the gradient and its square, and a denominator stability term σ = 10^-5. The accuracy of these TextCNN-based models is also shown in Table 2.
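As a concrete illustration of the optimizer settings above, the following sketch performs a single Adam update with the stated hyper-parameters (lr = 0.005, β1 = 0.9, β2 = 0.999, stability term 10^-5); the parameter and gradient values are toy inputs, not taken from the experiments.

```python
# Minimal sketch of one Adam update with the hyper-parameters used for
# training the TextCNN classifier; `param` and `grad` are hypothetical scalars.
def adam_step(param, grad, m, v, t, lr=0.005, beta1=0.9, beta2=0.999, eps=1e-5):
    m = beta1 * m + (1 - beta1) * grad       # running average of the gradient
    v = beta2 * v + (1 - beta2) * grad ** 2  # running average of its square
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)  # eps stabilizes the denominator
    return param, m, v

param, m, v = 1.0, 0.0, 0.0
param, m, v = adam_step(param, grad=0.5, m=m, v=v, t=1)
```

In practice the same hyper-parameters would be passed to a deep learning framework's Adam implementation rather than hand-coded.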

Baselines
To evaluate the attacking performance, we use the TextAttack (Morris et al, 2020) framework to deploy the following baselines:
• AGA (Alzantot et al, 2018): it uses a combination of restrictions on word embedding distance and language model prediction scores to reduce the search space. For the search algorithm, it adopts a genetic algorithm, a popular population-based evolutionary algorithm.
• Faster Alzantot Genetic Algorithm (FAGA) (Jia et al, 2019): it accelerates the genetic-algorithm search of AGA.
• PWWS (Ren et al, 2019): it draws substitution candidates from WordNet (Miller et al, 1990) and sorts the word attack order by multiplying the word saliency and the probability variation.
• BAE (Garg and Ramakrishnan, 2020): it generates substitution candidates with a BERT masked language model.
• MHA (Zhang et al, 2019): it applies Metropolis-Hastings sampling to generate fluent adversarial examples.
• TextFooler (TF) (Jin et al, 2020): it ranks the important words with a strategy similar to Eq. 4. With this importance ranking, the attacker prioritizes replacing them with the most semantically similar and grammatically correct words until the prediction is altered.
• Particle Swarm Optimization (PSO) (Zang et al, 2020): it selects word candidates from HowNet and employs the POS to find adversarial text.This method treats every sample as a particle whose location in the search space needs to be optimized.

Experimental Settings and Evaluation Metrics
For our RJA and RJA-MMR, we use the Universal Sentence Encoder (USE) (Cer et al, 2018) to measure sentence semantic similarity for the target distribution in Eq. 9. We set k = 30 substitution candidates per victim word; to generate these candidates, we use RoBERTa-large (Liu et al, 2019) as the MLM with the WordPiece (Wu et al, 2016) tokenizer for contextual infilling, and utilize OpenHowNet (Qi et al, 2019) with the NLTK (Bird et al, 2009) tokenizer as the synonym thesaurus. For the sampling-based algorithms, MHA and the proposed methods (RJA, RJA-MMR), we set the maximum number of iterations T to 1000. We argue that the quality of adversarial examples should be appraised with regard to three key facets: attacking performance, imperceptibility, and fluency. To measure these facets, we use the following five metrics:
• Successful attack rate (SAR): the percentage of attacks in which the adversarial examples make the victim models predict a wrong label.
• Modification rate (Mod): the percentage of modified tokens. Each replacement, insertion or removal action accounts for one modified token.
• Grammar error (GErr): the absolute rate of increased grammatical errors in the successful adversarial examples compared to the original text, where we use LanguageTool (Naber et al, 2003) to obtain the number of grammatical errors.
• Perplexity (PPL): a metric used to evaluate the fluency of adversarial examples (Kann et al, 2018; Zang et al, 2020). The perplexity is calculated using the small-sized GPT-2 with a 50k vocabulary (Radford et al, 2019).
• Textual similarity (Sim): the cosine similarity between the sentence embedding of the input and that of the adversarial sample, where both sentences are encoded with the Universal Sentence Encoder (USE) (Cer et al, 2018).
SAR evaluates attack performance, while Mod and Sim measure imperceptibility.GErr and PPL assess language fluency.
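Two of these metrics can be sketched in a few lines; the whitespace tokenizer and the toy embedding vectors below are simplifications of the paper's setup, which uses real sentence embeddings from USE and token-level alignment.

```python
import math

def modification_rate(original, adversarial):
    """Mod: fraction of modified tokens under a simple whitespace tokenization.
    Length differences (insertions/removals) are counted as modifications."""
    orig, adv = original.split(), adversarial.split()
    changed = sum(o != a for o, a in zip(orig, adv)) + abs(len(orig) - len(adv))
    return changed / len(orig)

def cosine_similarity(u, v):
    """Sim: cosine similarity between two sentence embeddings (toy vectors here)."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

mod = modification_rate("the movie was great fun", "the movie was awful fun")
sim = cosine_similarity([1.0, 0.0, 1.0], [1.0, 0.0, 0.0])
```

SAR, GErr and PPL additionally require the victim classifier, LanguageTool and GPT-2 respectively, so they are omitted from this sketch.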

Experimental Results and Analysis
The main experimental results on attacking performance (SAR), imperceptibility (Sim, Mod) and fluency of adversarial examples (PPL, GErr) are listed in Tables 3 and 4. Moreover, we demonstrate adversarial examples crafted by the various methods in Table 5. We manifest the three contributions mentioned in the Introduction section by answering three research questions.
Does our method make more threatening attacks compared with the baselines?
We compare the attacking performance of the proposed method RJA-MMR and the baselines in Table 3. The table demonstrates that RJA-MMR consistently outperforms the competing methods across different data domains, regardless of the structure of the classifiers. Further, even RJA by itself, without MMR, can craft more menacing adversarial examples than most baselines. We attribute this outstanding attacking performance to two aspects of RJA. Firstly, RJA optimizes performance by stochastically searching the domain. Most of the baselines perform a deterministic search which can get stuck in local optima; in contrast, the stochastic mechanism helps skip local optima and further maximizes the attacking performance. Secondly, some of the baselines strictly attack the victim words in the order of the word saliency rank (WSR), where the domain of the hierarchical search is limited to combinations of neighboring victim words from the WSR, which can miss the optimal combination of victim words. Unlike these methods, RJA enlarges the search domain by testing combinations of substitutions that do not follow the WSR order. Thus, the proposed method RJA achieves the best attacking performance, with the highest successful attack rate (SAR).
Is RJA-MMR superior to the baselines in terms of imperceptibility?
We evaluate the imperceptibility of the different attack strategies in terms of semantic similarity (Sim) and modification rate (Mod) between the original input text and its derived adversarial examples, as shown in Table 3. It can be seen that the proposed RJA-MMR attains the best performance among all methods. This performance is attributed to the mechanisms of RJA and MMR. For semantic preservation, we statistically design the target distribution (Eq. 9) with a strong regularization on semantic similarity in each iteration. Moreover, HowNet is a knowledge-graph-based thesaurus that provides part-of-speech (POS) aware substitutions; compared with the candidates supplied by the baselines, the synonyms from HowNet can be more semantically similar to the original words. As for the modification rate, the proposed MMR is designed to restore attacked words in successful adversarial examples, so that RJA-MMR perturbs fewer words without sacrificing attacking performance. Thus, we conclude that the proposed RJA-MMR provides the best imperceptibility among the compared methods.
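The word-restoration idea behind MMR can be sketched as a Metropolis-Hastings accept/reject step: one attacked word is proposed for restoration, and the proposal is accepted with a probability driven by a score on the restored sentence. This is a minimal illustration only; the `score` function below is a hypothetical stand-in for the paper's target distribution (Eq. 9), not its exact sampler.

```python
import random

def mh_restore(words, perturbed_idx, original_words, score, rng=random.random):
    """One Metropolis-Hastings restoration step: propose putting back one
    originally-attacked word; accept with probability min(1, score ratio)."""
    if not perturbed_idx:
        return words, perturbed_idx
    i = random.choice(sorted(perturbed_idx))   # pick one perturbed position
    proposal = list(words)
    proposal[i] = original_words[i]            # restore the original word
    ratio = score(proposal) / max(score(words), 1e-12)
    if rng() < min(1.0, ratio):                # MH acceptance step
        return proposal, perturbed_idx - {i}
    return words, perturbed_idx

# Toy usage: an indifferent score and an rng that always accepts.
adv = ["the", "film", "was", "dreadful"]
orig = ["the", "movie", "was", "dreadful"]
restored, remaining = mh_restore(adv, {1}, orig, score=lambda w: 1.0,
                                 rng=lambda: 0.0)
```

In the actual method the score would combine the classifier's misclassification confidence with semantic similarity, so restorations that break the attack are rejected.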
Is the quality of adversarial examples generated by the proposed methods better than that crafted by the baselines?
We maintain that qualified adversarial examples should be fluent and grammatically correct. From Table 4, we find that RJA-MMR attains the lowest perplexity (PPL), which means the examples generated by RJA-MMR are more likely to appear in the evaluation corpus. As our corpus is sufficiently large and the evaluation model is broadly used, this indicates that these examples are more likely to appear in the natural language space, eventually leading to better fluency. For grammar errors, the proposed RJA-MMR is substantially better than the baselines, which indicates a higher quality of the adversarial examples. We attribute this performance to our method of finding word substitutions, which constructs the candidate set by intersecting the candidates from HowNet and the MLM.
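The candidate-construction step described above, intersecting HowNet synonyms with MLM infilling suggestions, can be sketched as a simple filtered intersection. The word lists below are hypothetical; in the paper the two sources are OpenHowNet and RoBERTa-large respectively.

```python
def build_candidates(hownet_synonyms, mlm_suggestions, k=30):
    """Keep only MLM suggestions that are also knowledge-based synonyms,
    preserving the MLM's ranking order and capping at k candidates."""
    synonyms = set(hownet_synonyms)
    return [w for w in mlm_suggestions if w in synonyms][:k]

cands = build_candidates(
    hownet_synonyms=["glad", "joyful", "content", "merry"],   # hypothetical
    mlm_suggestions=["happy", "joyful", "glad", "excited"],   # hypothetical
)
```

The intersection trades recall for precision: every surviving candidate is both semantically licensed (HowNet) and contextually plausible (MLM).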

Ablation Study
To rigorously validate the efficacy of the proposed RJA-MMR method, this section conducts a detailed ablation study, dissecting each component to assess its individual impact and overall contribution to the method's performance.

Effectiveness of RJA
We compare the attacking performance of our Reversible Jump Attack methods (RJA, RJA-MMR) and the baselines in Table 3, reflected by SAR. RJA helps attackers achieve the best attacking performance, with the largest SAR across the different downstream tasks. Apart from RJA-MMR, its ablation RJA also surpasses the strong baselines in most cases. Therefore, RJA is effective in terms of attacking performance.

Effectiveness of MMR
MMR is a stochastic mechanism to reduce the modifications of adversarial examples with attacking performance preserved.Besides RJA-MMR, we also apply MMR to different attacking algorithms, including PSO, TF, PWWS, BA and MHA, aiming to demonstrate the advantages of MMR in general.
From Table 3, we find that RJA-MMR achieves superior performance to RJA with lower modification rates. Moreover, the results for the other baselines are shown in Fig. 3, which shows that the attacking algorithms with MMR consistently have a lower modification rate than those without MMR. This means that attacking strategies can generally benefit from MMR by making fewer modifications.

Performance versus the Number of Iterations
The performance of the proposed methods is influenced by the number of iterations, denoted T. To examine this relationship, we conducted an ablation study on the correlation between performance and T. Figure 4 reveals a positive trend in which performance improves as the number of iterations increases. Notably, performance begins to plateau, indicating convergence, at T = 100.

Effectiveness of the Word Candidates
In our ablation study, detailed in Table 6, we explored the effectiveness of various word candidate selection methods on the performance of RJA-MMR against the TextCNN model, utilizing the AG News dataset.Our evaluation included three strategies: using HowNet, MLMs with BERT-base (Devlin et al, 2019), RoBERTa-large (Liu et al, 2019), and a synergistic approach combining HowNet and MLMs.Individually, HowNet and the MLM approaches showed notable performance, with RoBERTa-large slightly outperforming BERT-base.However, the combination of HowNet and MLMs produced superior results, surpassing the individual methods in all evaluated metrics, highlighting the significant advantage of integrating HowNet with MLMs to enhance the effectiveness of adversarial attacks.Furthermore, our analysis of combination strategies for generating word candidates revealed that the more sophisticated MLM, RoBERTa-large, yielded a more effective attack performance than its less advanced counterpart, BERT-base.This finding suggests a positive correlation between advancements in MLM technology and enhancements in attack efficacy.We attribute this trend to the ability of more advanced MLMs to generate more relevant and suitable word candidates for use in attack methodologies, thereby increasing the precision and effectiveness of adversarial strategies.

Platform and Efficiency Analysis
In this section, we aim to evaluate efficiency from both empirical and theoretical perspectives. To perform the empirical complexity (EC) evaluation, we carry out all experiments on RHEL 7.9 with the following specification: an Intel(R) Xeon(R) Gold 6238R 2.2GHz CPU with 28 cores (26 cores enabled) and 38.5MB L3 cache (max turbo frequency 4.0GHz, min 3.0GHz), an NVIDIA Quadro RTX 5000 GPU (3072 CUDA cores, 384 tensor cores, 16GB memory), and 88GB RAM. Table 7 lists the time consumed for attacking the BERT and TextCNN classifiers on the Emotion dataset. The metric of time efficiency is seconds per example; a lower value indicates better efficiency. Results from Table 7 show that our RJA and RJA-MMR run longer than some static counterparts (PWWS, BAE, TF) but are more efficient than the others (PSO, FAGA, MHA and BA). The longer running time of our methods reflects the time genuinely needed to find more optimal adversarial examples.
To theoretically gauge convergence speed, researchers employ the probabilistic concept of mixing time, which denotes the time a Markov chain takes to closely approach its steady-state distribution (Kroese et al, 2011). Given that mixing time is bounded via the total variation (TV) distance between the proposed and target distributions, TV distance is frequently used as a metric to quantify both mixing time and speed of convergence (Metropolis et al, 1953b; Green, 1995a). Table 7 reveals that the proposed RJA-MMR method registers the lowest TV distance, indicating superior theoretical convergence speed compared to the other methods.
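For discrete distributions over the same support, the total variation distance used above reduces to half the L1 distance. The two distributions below are purely illustrative, not values from Table 7.

```python
def total_variation(p, q):
    """TV distance between two discrete distributions over the same support:
    TV(p, q) = 0.5 * sum_i |p_i - q_i|, which lies in [0, 1]."""
    assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

tv = total_variation([0.5, 0.3, 0.2], [0.4, 0.4, 0.2])
```

A TV distance near 0 means the proposal is already close to the target distribution, which is why a lower value signals faster mixing.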

Transferability
The transferability of adversarial examples refers to their ability to degrade, to a certain extent, the performance of models other than the specific classifier on which they were generated (Goodfellow et al, 2015). To evaluate transferability, we exchange the adversarial examples generated on BERT-C and TextCNN, and the results are shown in Fig. 5.
When the adversarial examples generated by our methods are transferred to attack BERT-C and TextCNN, the attacking performance of RJA-MMR still achieves a success rate of more than 80%, the best among the baselines, as illustrated in Fig. 5. Apart from RJA-MMR, its ablated component RJA also surpasses most baselines. This suggests that the transfer attacking performance of the proposed methods consistently outperforms the baselines.

Targeted Attacks
A targeted attack attacks a data sample with class y such that the sample will be misclassified as a specified target class y′, and not any other class, by the victim classifier. RJA and MMR can easily be adapted to targeted attacks by replacing 1 − F_y(x) with F_y′(x) in Eq. 9. The targeted attack experiments are conducted on the Emotion dataset. The results, shown in Table 8, demonstrate that the proposed RJA-MMR achieves better performance than PWWS in terms of attacking performance (SAR), imperceptibility (Mod, Sim) and sentence fluency (GErr, PPL).
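The adaptation amounts to swapping one term of the objective: instead of rewarding any misclassification via 1 − F_y(x), the sampler rewards probability mass on the chosen target class, F_y′(x). A toy illustration, with a hypothetical class-probability vector standing in for the classifier output F(x):

```python
def untargeted_score(probs, true_label):
    """Untargeted objective: reward any misclassification, 1 - F_y(x)."""
    return 1.0 - probs[true_label]

def targeted_score(probs, target_label):
    """Targeted objective: reward mass on a specific class, F_{y'}(x)."""
    return probs[target_label]

probs = [0.1, 0.7, 0.2]                      # hypothetical classifier output
u = untargeted_score(probs, true_label=1)    # rewards leaving class 1
t = targeted_score(probs, target_label=2)    # rewards reaching class 2 only
```

Everything else in the sampler (proposals, acceptance, MMR restoration) is unchanged; only this score inside the target distribution differs.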

Attacking Models with Defense Mechanism
Defending against textual adversarial attacks is paramount to ensuring the integrity and security of machine learning models used in natural language processing applications. Effective defense mechanisms broadly fall into two categories: active defenses that harden the model through robust training, and passive defenses that detect adversarial examples at inference time. To ensure a thorough evaluation of our proposed attack methods, we have integrated one defense mechanism of each kind into our assessment. For passive defense, we adopt Frequency-Guided Word Substitutions (FGWS) (Mozes et al, 2021), an approach that excels at identifying adversarial examples. For active defense, we incorporate Random Masking Training (RanMASK) (Zeng et al, 2023), a technique that bolsters model resilience via specialized training routines. We perform the adversarial attack against BERT-C on the IMDB and SST2 datasets, and the results are presented in Table 9. The results show that our method outperforms the baselines.

Adversarial Retraining
This section explores RJA-MMR's potential for improving downstream models' accuracy and robustness.

Can adversarial retraining help achieve better test accuracy?
As shown in Fig. 6, when the training data is accessible, adversarial training gradually increases the test accuracy while the proportion of adversarial data is smaller than roughly 30%. Based on our results, we can see that a certain amount of adversarial data can help improve the models' accuracy, but too much such data will degrade the performance. This means that the right amount of adversarial data needs to be determined empirically, which matches conclusions from previous research (Jia et al, 2019; Yang et al, 2021).

Does adversarial retraining help the models defend against adversarial attacks?
To evaluate this, we use RJA-MMR to attack classifiers trained with different proportions (0%, 10%, 20%, 30%, 40%) of adversarial examples. A higher successful attack rate (SAR) indicates a victim classifier more vulnerable to adversarial attacks. As shown in Fig. 7, adversarial training helps decrease the attack success rate by more than 10% for the BERT classifier (BERT-C) and 5% for TextCNN. These results suggest that the proposed RJA-MMR can be used to improve downstream models' robustness by adding its generated adversarial examples to the training set.
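The retraining protocol above can be sketched as mixing a given proportion of adversarial examples into the clean training set before retraining. The data below are placeholders; in the experiments the adversarial examples come from RJA-MMR attacks on training instances.

```python
import random

def mix_training_data(clean, adversarial, proportion, seed=0):
    """Augment the clean training set so that adversarial examples amount to
    `proportion` of the clean-set size, then shuffle (deterministic via seed)."""
    rng = random.Random(seed)
    n_adv = int(len(clean) * proportion)
    sampled = rng.sample(adversarial, min(n_adv, len(adversarial)))
    mixed = clean + sampled
    rng.shuffle(mixed)
    return mixed

# Hypothetical (text, label) pairs standing in for real training data.
clean = [("text %d" % i, i % 2) for i in range(100)]
adv = [("adv %d" % i, i % 2) for i in range(50)]
train = mix_training_data(clean, adv, proportion=0.3)
```

Sweeping `proportion` over 0.0 to 0.4 and retraining at each setting reproduces the experimental grid used for Figs. 6 and 7.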

Parts of Speech Preference
Given the superiority of the proposed method in attacking performance, we investigate its attacking preference, described by parts of speech (POS), for further linguistic analysis. In this subsection, we break down the attacked words in the AG's News dataset by part-of-speech tags with the Stanford POS tagger (Toutanova et al, 2003), and the collected statistics are shown in Table 10. By analyzing the results, we aim to identify the more vulnerable POS categories by comparing the proposed methods and the baselines.
We apply the POS tagger to annotate the attacked words with POS tags: noun, verb, adjective (Adj.), adverb (Adv.) and others (i.e., pronoun, preposition, conjunction, etc.). Statistical results in Table 10 demonstrate that all the attacking methods heavily focus on nouns. Presumably, in the topic classification task, the prediction depends heavily on nouns. However, the proposed attacking strategies (RJA and RJA-MMR) tend to target a more significant proportion of Others than any other method; thus we might conclude that Others (pronouns, prepositions and conjunctions) are the second most adversarially vulnerable category. Since these tags do not carry much semantics, we believe they do not linguistically or semantically affect prediction, but they possibly impact the sequential dependencies, which could contaminate the contextual understanding of the classifiers and subsequently cause wrong predictions.

Robustness versus the Scale of Pre-trained Models
Examining Tables 3 and 4 raises a question: does increasing the scale of a model enhance its robustness? To explore this, we conducted a study applying our proposed attack methods to victim models of varying sizes on the Emotion dataset.
To provide a more nuanced analysis, we recognize that limiting our comparison to the two initial versions of BERT, base and large, as introduced by Devlin et al (2019), does not sufficiently support robust experimental outcomes. Hence, we have incorporated several widely recognized versions published subsequent to the original BERT paper. Specifically, we analyzed four versions of BERT documented in Turc et al (2019): BERT-Tiny, BERT-Mini, BERT-Small, and BERT-Medium. Notably, the most downloaded of these versions has reached up to 6,559,486 monthly downloads on Huggingface alone. Our findings, detailed in Table 11, demonstrate a positive correlation between model size and robustness, confirming the value of incorporating a diverse range of model sizes into our analysis.

Conclusion and Future Work
In recent years, the safety and fairness of NLP models have been greatly threatened by adversarial attacks. Many researchers have raised concerns about the robustness of NLP classifiers because of their broad downstream tasks, such as fake news detection, sentiment analysis, and email spam detection. To improve classifiers' robustness, we have presented RJA-MMR, which consists of two algorithms, Reversible Jump Attack (RJA) and Metropolis-Hasting Modification Reduction (MMR). RJA poses threatening attacks on NLP classifiers by applying the Reversible Jump algorithm to adaptively sample the number of perturbed words, the victim words and their substitutions for each textual input. MMR is a customized algorithm that improves imperceptibility, especially by lowering the modification rate, utilizing the Metropolis-Hasting algorithm to restore attacked words without affecting attacking performance. Experiments demonstrate that RJA-MMR delivers the best attack success, imperceptibility and sentence fluency among strong baselines.
Although the adversarial examples can threaten the NLP models, these examples are not bugs but features (Ilyas et al, 2019).To protect the models from the attacks, we conduct extensive experiments with a defense strategy, adversarial retraining, which is done by joining the adversarial examples in the training set and then retraining the models with the newly constructed training set.Unsurprisingly, in our experiments, the robustness of the classifiers has been greatly improved, while the accuracy of these models on clean data drops when an excessive amount of adversarial examples are injected.
Since adversarial attacks are among the most effective methods for testing the robustness of a model, the proposed attacks raise some concerns about deep neural networks (DNNs) and large pre-trained models. As DNNs and pre-trained language models have achieved great success, most existing well-performing NLP classifiers are based on these techniques. Such popularity could put textual classifiers at high risk because attackers can make effective attacks by utilizing DNNs and large pre-trained models. Thus, a safer way of applying these techniques is a promising direction for future research. At the same time, we also plan to study and design targeted defense strategies to further improve the robustness of NLP classifiers against future adversarial attacks.
• Consent to participate: The authors give their consent to participate.
• Consent for publication: The authors give their consent to the publication of all information in this paper.

Fig. 3
Fig. 3 Comparisons on modification rates among attacking strategies (PSO, TF, PWWS, BA, MHA) with MMR and without MMR to attack the BERT-C on AG News dataset.

Fig. 4
Fig. 4 The progression of SAR, Sim, Mod, GErr, and PPL metrics for SST2 BERT over increasing iterations (T). Performance trends and convergence points are visually represented.

Fig. 5
Fig. 5 Performance of transfer attacks to victim models (BERT-C and TextCNN) on Emotion.A lower accuracy of the victim models indicates a higher transfer ability (i.e., the lower, the better).
Following Li et al (2021), we use RJA-MMR to generate adversarial examples from AG's News training instances and include them as additional training data. We inject different proportions of adversarial examples into the training data for a BERT-based MLP classifier and a TextCNN classifier without any pre-trained embedding. We provide an adversarial retraining analysis by answering the following two questions:

Fig. 6
Fig. 6 Results of adversarially trained BERT and TextCNN by inserting the different numbers of adversarial examples to the training set.The accuracy is based on the performance of the SST2 test set.

Fig. 7
Fig. 7 The success attack rate (SAR) of adversarially retrained models with different numbers of adversarial examples.A lower SAR indicates a victim classifier is more robust to adversarial attacks.

• We design a highly effective adversarial attack method, Reversible Jump Attack (RJA), which utilizes the Reversible Jump algorithm to generate adversarial examples with an adaptive number of perturbed words. The algorithm gives our attack method an enlarged search domain by jumping across dimensions.
• We propose Metropolis-Hasting Modification Reduction (MMR), which applies the Metropolis-Hasting (MH) algorithm to construct an acceptance probability and uses it to restore attacked victim words, improving imperceptibility while preserving attacking performance. MMR works with RJA and is empirically proven effective on adversarial examples generated by other attacking algorithms.
• We evaluate our attack method on real-world public datasets. Our results show that our methods achieve the best performance in terms of attack performance, imperceptibility and fluency.

Table 1
List of notations used in this research.
Algorithm 2: Metropolis-Hasting Modification Reduction (MMR). Input: adversarial candidate x = (m, v, s). Output: the final adversarial example x*. Over T iterations, the algorithm appends each accepted sample x_t^u to a set Adv_set; it then chooses the candidate with the least modification from Adv_set and returns it as the final adversarial example x*.

Table 2
Datasets and accuracy of victim models before attacks.
AG's News (Zhang et al, 2015) is a news classification dataset with 127,600 samples belonging to 4 topic classes: World, Sports, Business and Sci/Tech. Emotion (Saravia et al, 2018) is a dataset with 20,000 samples and 6 classes: sadness, joy, love, anger, fear and surprise. SST2 (Socher et al, 2013) is a binary-class (positive and negative) sentiment dataset with 9,613 samples. The IMDB dataset (Maas et al, 2011), comprising movie reviews from the Internet Movie Database, is predominantly utilized for binary sentiment classification, categorizing reviews into 'positive' or 'negative' sentiments. The details of these datasets can be found in Table 2.

Table 3
Results on SAR, Mod, and Sim metrics among the baselines and proposed methods on different datasets.The best performance is in bold.

Table 4
Results on PPL and GErr metrics among the baselines and proposed methods on different datasets.The best performance is in bold.

Table 5
Adversarial examples of the Emotion dataset for victim classifier BERT-C.Blue texts are original words, while red ones are substitutions.Besides the examples, the attack performance is measured by attacking success and confidence in making correct predictions.The lower confidence indicates better performance and the successful attacks and lowest confidence are bold.

Table 6
Performance metrics for RJA-MMR against the TextCNN model on the AG News dataset using varied word candidate selection methods.The best performances for each metric are highlighted in bold.

Table 7
Assessment of attack algorithms' efficiency on the Emotion dataset, utilizing empirical complexity (EC) in seconds per example for practical evaluation and total variation (TV) distance for theoretical convergence speed analysis. Lower EC values denote higher efficiency. The top three methods are highlighted in bold, italic, and underlined.

Table 8
Targeted attack and imperceptibility-preserving performance on the Emotion dataset.The victim models are BERT-C and TextCNN classifiers, and the baseline is PWWS.The statistics for better performance are vertically highlighted in bold.

Table 9
A comparative analysis of attack performance (SAR) against BERT-C when subjected to two defense mechanisms, FGWS and RanMASK, across IMDB and SST2 datasets.Performance metrics are highlighted in bold to emphasize superior results.

Table 10
POS preference with respect to choices of victim words among attacking methods.The tags with the horizontally highest and second highest proportion are bold and italic, respectively.

Table 11
Robustness of BERT Models of Different Sizes on the Emotion Dataset.These models are trained with the same datasets and hyper-parameter but with different numbers of transformer layers (L) and hidden embedding sizes (H).
• Availability of data and materials: All of the datasets are available on Huggingface (https://huggingface.co/datasets) and on our GitHub site (https://github.com/MingzeLucasNi/RJA-MMR.git).
• Code availability: All code from our experiments is available at https://github.com/MingzeLucasNi/RJA-MMR.git.
• Authors' contributions: Mingze Ni contributed to conceptualization, theoretical analysis, experiments and draft preparation; Zhensu Sun contributed to experiments, draft preparation and writing review; Wei Liu contributed to conceptualization, theoretical analysis, draft writing and editing.