Abstract
In recent years, a surge of criminal activities involving cross-cryptocurrency trades has emerged in Ethereum, the second-largest public blockchain platform. Most existing anomaly detection methods use either traditional machine learning with feature engineering or graph representation learning to capture the information in a transaction network. However, these methods either ignore the timestamp and transaction flow direction information in the transaction network, or consider only a single transaction network, so the cross-cryptocurrency trading patterns in Ethereum are usually overlooked. In this paper, we introduce a Multilayer Temporal Transaction Anomaly Detection (MT\(^2\)AD) model for the Ethereum network based on graph neural networks. Specifically, for a given Ethereum token transaction network, we first extract its initial features, including the structure subgraph and edge features. We then model the temporal information in the subgraph as a series of network snapshots according to the timestamp on each edge and a time window. To capture cross-cryptocurrency trading patterns, we combine the snapshots from multiple token transactions at a given timestamp and treat the result as a new combined graph. We further apply a graph convolution encoder with an attention mechanism and a pooling operation to this new graph to obtain a graph-level embedding, and with these graph-level embeddings we transform anomaly detection on dynamic multilayer Ethereum transaction networks into a graph classification task. MT\(^2\)AD integrates transaction structure features, edge features and cross-cryptocurrency trading patterns into a single framework that performs anomaly detection with graph neural networks.
Experiments on three real-world multilayer transaction networks show that the proposed MT\(^2\)AD (0.8789 Precision, 0.9375 Recall, 0.4987 Fb-Macro and 0.9351 Fb-Weighted) achieves the best performance on most evaluation metrics in comparison with several competing approaches; the benefit of considering multiple tokens is also demonstrated.
Introduction
Blockchain, first introduced by Satoshi Nakamoto in his seminal paper,^{Footnote 1} is a distributed ledger technology that can maintain a growing number of data records without a centralized and trusted third party. Its public, decentralized, and tamper-proof approach, first introduced to solve the double-spending problem, has been extended to a wide range of other use cases, from supply chain tracking to secure voting. Ethereum [1], launched in 2015, is the first platform to implement smart contracts, and is nowadays one of the most popular blockchain-based platforms for transacting and exchanging value, with a throughput of 14.8 transactions per second and almost 700,000 daily active addresses [2].
With the recent boom of crypto-assets, anomaly detection in Ethereum [1] has received considerable attention. For example, in Ethereum, the unexpected appearance of particular subgraphs may indicate newly emerging malware or price pump-and-dump trading [3]. Anomaly detection in blockchain transaction networks is an emerging research area in the cryptocurrency community [4].
Existing methods for anomaly detection on Ethereum transaction networks, i.e., networks where nodes denote Ethereum transaction addresses and edges denote transactions between addresses, can be roughly divided into two categories [4]:

Traditional machine learning with manually extracted features: these methods first manually extract transaction features (such as transaction value) and then apply traditional machine learning classifiers. However, such feature engineering requires expert knowledge, and the classification results rely mainly on the manually extracted features, which ignore the structural information of the transaction network and lead to suboptimal results. Moreover, these methods cannot automatically extract transaction network features.

Node representation learning: these methods adopt random-walk-based node representation learning [5, 6] or graph neural network (GNN) [4] techniques to automatically learn deep features of the transaction network and obtain an embedding for each node, which is then used in a classification task to separate anomalous nodes from normal ones. However, most of them ignore the timestamp information or the transaction flow direction information.
Overall, these methods achieve excellent results. However, some problems remain to be solved, leaving room for further improvement:

1.
Multi-token transactions An address (account) may be involved in multiple token transaction networks. For example, money laundering tends to move funds across multiple cryptocurrency ledgers [7].

2.
Temporal transaction information Existing methods either ignore the edge timestamp information, or consider only the last transaction record without fully modeling the temporal information in the transaction network.

3.
Growth of transaction data The number of nodes in a transaction network can be enormous, e.g., the largest number of new nodes in one day exceeds 10,000 in the BNB (Binance Coin) token [8]^{Footnote 2} transaction network. When one year of transaction information and multi-token transactions are considered, the transaction network becomes so large that it is difficult to analyze.

4.
Edge features The original transaction network, which we crawl via an Ethereum client with EthereumETL, is a weighted directed graph: each edge carries transaction value and transaction flow direction information. Traditional graph neural networks [9] are unsuitable for weighted directed networks, because they assume a symmetric adjacency matrix, and it is not easy to define a symmetric, real-valued Laplacian for a directed graph.
To address the above challenges, we propose a Multilayer Temporal Transaction Anomaly Detection (MT\(^2\)AD) model for the Ethereum network based on graph neural networks, and cast the problem as a graph classification task. The model mainly contains three modules:

1.
Extraction of token transaction network initial features. This module includes token subgraph sampling and edge feature modeling, addressing challenges 3 and 4.

2.
Multilayer token transaction network snapshot construction. This module extracts network snapshots from each token subgraph and combines the snapshots of multiple tokens at the same timestamp, addressing challenges 1 and 2.

3.
Anomaly detection with graph representation learning. This module transforms anomaly detection into a graph classification task.
We obtain multiple tokens' transaction information via an Ethereum client with EthereumETL,^{Footnote 3} and model each token's transactions as a temporal weighted directed graph, where nodes represent transaction addresses, edges represent transactions between two addresses with the direction denoting the transaction flow from one address to another, edge weights denote transaction values, and each edge carries a timestamp. The initial feature extraction module includes subgraph feature sampling and edge feature modeling. The subgraph sampling component [10] samples nodes from each token transaction network. Because most daily token networks have fewer than 100 nodes [11], subgraph sampling can reduce the size of the transaction network to be processed with minimal impact on results. First, we extract a fixed-size subgraph from each transaction network by keeping the p most active edges. Since the subgraph is a weighted directed graph, each edge carries transaction value and direction information, both important features of a transaction network. To reduce information loss when learning graph representations with a graph neural network encoder, we transform the transaction value and direction information on each edge into node attributes. Second, the multilayer token transaction network snapshot construction module extracts temporal information and models cross-cryptocurrency trading patterns. We first transform each token transaction subgraph into a series of network snapshots according to the edges' timestamps and a time window \(\Delta t\). Then, the snapshots of multiple tokens at the same timestamp are combined into a new graph, from which the graph neural network can capture cross-cryptocurrency trading patterns.
Third, in the anomaly detection with graph representation learning module, a graph neural network with an attention mechanism [12] encodes each node of each combined graph as an embedding. This captures not only each node's structural context and the varying influence of its different neighbors, but also the common information shared among different token transaction networks, since the same graph encoder is applied to all of them. A mean pooling operation is then applied to the node embeddings of each combined graph to obtain an entire graph-level representation (i.e., a vector) for graph classification (normal or abnormal). Extensive experiments on three real-world transaction network datasets demonstrate the effectiveness of the proposed method.
In summary, the key contributions of this paper are:

We introduce a Multilayer Temporal Transaction Anomaly Detection (MT\(^2\)AD) model for the Ethereum network based on graph neural networks. Compared with existing anomaly detection methods for blockchain transaction networks, MT\(^2\)AD considers cross-cryptocurrency trading patterns.

Anomaly detection in multi-token transaction networks is naturally cast as a graph classification task, classifying each combined graph as normal or abnormal using its graph-level embedding.

Extensive experiments on three real-world transaction datasets, including two Ethereum multi-token transaction networks and a Ripple multi-blockchain network, demonstrate the effectiveness of the proposed model in comparison with several competing approaches.
The remainder of this paper is organized as follows. "Related works" summarizes research related to our work. "Problem definition" introduces the problem statements, and "Model" describes our method. "Experiments" presents our experiments, and "Conclusion" concludes this paper.
Related works
Node representation learning
Node representation learning [13,14,15,16,17,18], also called node embedding, graph embedding or graph representation learning, aims to learn a low-dimensional dense vector (embedding) for every node in a graph. It maps each node into a low-dimensional vector space that captures node similarity, network structure and/or other attributes. Inspired by the great success of deep neural networks in computer vision, Kipf et al. [9] extended the convolution operation from image data to graph-structured data and proposed Graph Convolutional Networks (GCNs). GCNs learn graph structure and node attribute information simultaneously by iteratively aggregating neighbors' information into the current node, i.e., the message passing mechanism. Building on GCNs, GAT [12] (Graph Attention Networks) utilizes an attention mechanism to learn the importance of different neighbors, and GraphSAGE [19] performs inductive representation learning on large graphs by sampling fixed-size local node neighborhoods and aggregating their features. Since then, many graph neural network algorithms have been proposed; representative methods include Graphormer [20], AutoGNAS [21], CSTGN [22] and DeepRank-GNN [23].
Anomaly detection in blockchain transaction network
Trans2vec [6] is a phishing scam detection method for the Ethereum transaction network based on network embedding. It first constructs the Ethereum transaction network as a weighted directed network. Then, on the basis of node2vec [13], it proposes a random-walk-based network embedding method that takes timestamps and edge transaction values into consideration when sampling node sequences. Finally, it uses a one-class support vector machine to classify nodes as normal or phishing accounts. Liang Chen [5] proposed a phishing scam detection method for the Ethereum transaction network based on a graph autoencoder. It first uses a random walk to sample a fixed-size subgraph for a randomly selected node, then applies a GCN to extract features of each subgraph; finally, the GCN output and the attribute matrix are concatenated for phishing account classification with LightGBM.^{Footnote 4} Yijun [24] proposed an attributed ego-graph embedding framework for phishing detection on the Ethereum transaction network: ego-graphs are extracted for each labeled account, graph representations are learned for each ego-graph with a Skip-gram model [25], and a decision tree performs the detection task on the obtained embeddings. TTAGN [4] is a temporal transaction aggregation graph network representation learning method for Ethereum phishing scam detection. It models the temporal relations of historical transaction records between nodes to construct edge representations of the Ethereum transaction network; these edge representations are then fused with topological interaction relations through a graph neural network for phishing address detection. TSGN [26] builds transaction subgraph networks for identifying Ethereum phishing accounts, formulating the identification as a graph classification task.
TSGN first constructs a set of transaction subgraphs for each phishing and normal address, each subgraph carrying a label (phishing or normal). It then learns a graph embedding for each transaction subgraph, and finally trains a classifier on the obtained embeddings to predict the label of each subgraph. However, these graph-representation-learning-based anomaly detection methods all consider only a single transaction network, while a growing number of cryptocurrency criminals use cross-cryptocurrency trades to hide their identity [7]. Ofori-Boateng [11] demonstrated that considering multi-token transaction networks can improve anomaly detection accuracy: a stacked persistence diagram (SPD) method was proposed to perform topological anomaly detection on dynamic multilayer blockchain networks. However, edge features, such as the transaction amount between accounts or node degrees, were not taken into consideration, although these features are important for classifying accounts as normal or anomalous. Besides, SPD cannot automatically learn deep features of transaction networks by taking advantage of deep learning. Readers may refer to the survey [27] on graph anomaly detection with deep learning.
Graph classification using graph neural network
Given a set of graphs \(\{{g_1},{g_2}, \ldots ,{g_n}\}\) in which only some graphs are labeled, graph classification aims to predict the labels of unseen graphs. Graph classification methods can be divided into graph kernel algorithms [28, 29] and neural network techniques [30]. Graph kernel methods [28, 29] define a distance to measure the similarity between two graphs and classify graphs using this metric; examples include the random walk kernel, shortest path kernel, Weisfeiler-Lehman subtree kernel and deep graph kernel. However, graph kernel algorithms require handcrafted features (random walks, shortest paths, etc.) and hence generalize poorly. In this paper, we focus on neural network techniques for graph classification. A classical algorithm is Graph2vec [30], the first neural embedding framework to learn data-driven distributed representations of arbitrarily sized graphs in an unsupervised manner. Graph2vec brings the Skip-gram model [25] of word2vec to graphs, treating a graph as a document and the rooted subgraphs around its nodes as words. On the basis of Graph2vec [30], GL2vec [31] uses the line graphs (edge-to-vertex dual graphs) of input graphs to handle edge labels and capture more structural information. Other neural graph classification algorithms perform multiple feature transformations on the input graph via graph convolutions and then apply a pooling operation to reduce the graph's scale; this process can be repeated several times until an entire graph-level representation (i.e., a vector) is obtained for classification. Representative algorithms are DiffPool [32] and SAGPool [33]. GAUNets [34] is a graph neural network model with strong processing capabilities for graph structures; it was proposed for image classification and has been tested on MS-COCO and other datasets.
The experimental results show that GAUNets outperforms other traditional graph neural network models. Graph U-Nets [35] applies an encoder-decoder architecture to graph data for node classification and graph classification tasks. DGCNNII [18], an end-to-end model called Deep Graph Convolutional Neural Network II, can perform graph classification with up to 32 layers and extracts multi-scale node features. Readers may refer to the survey [36] on graph classification algorithms.
Problem definition
Definition 1
(Dynamic graph) A dynamic (or temporal) graph can be represented as \(G_d = \{ {G_{t_1}}, {G_{t_2}}, \ldots , {G_{t_T}} \}\), where \(G_{t_t} = (V_{t_t}, E_{t_t})\) is the network snapshot at timestamp \(t_t\), \(E_{t_t}\) is the edge set with \(\vert E_{t_t} \vert \ge 1\), and \(V_{t_t}\) contains all vertices incident to the edges in \(E_{t_t}\). The maximum timestamp is T. It should be noted that not all nodes of the graph are known at timestamp t, as new nodes may emerge at any later timestamp \(t'>t\).
Definition 2
(Multilayer graph) A multilayer graph with L layers can be represented as \(G_m = \{ {G^{l_1}}, {G^{l_2}}, \ldots , {G^{l_L}} \}\), where \({G^l} = ({V^l},{E^l})\) is the graph of layer l, and \(\vert V^l\vert \) (or \(n^l\)) and \(\vert E^l \vert \) denote the numbers of nodes and edges, respectively.
Definition 3
(Attributed multilayer graph) When the nodes of a multilayer graph carry attribute information, the multilayer graph \(G_m\) is called an attributed multilayer graph. The node attribute matrix of \({G^{l_L}}\) is denoted by \(X^{l_L} \in {\mathrm{{\mathcal{R}}}^{n^{l_L} \times F}}\), where F is the node attribute dimension and \({n^{l_L}}\) is the number of nodes in \({G^{l_L}}\).
Definition 4
(Dynamic attributed multilayer graph) A dynamic attributed multilayer graph with T timestamps and L token transaction networks can be denoted as \(G_{dm} = \{ \{{ {G_{{t_1}}^{{l_1}}}, {G_{{t_1}}^{{l_2}}}, \ldots , {G_{{t_1}}^{{l_L}}} } \}, \ldots , \{{ {G_{{t_t}}^{{l_1}}}, {G_{{t_t}}^{{l_2}}}, \ldots , {G_{{t_t}}^{{l_L}}} } \}, \ldots , \{{ G_{{t_T}}^{{l_1}}, G_{{t_T}}^{{l_2}}, \ldots , G_{{t_T}}^{{l_L}} } \}\}\), where \( {G_{{t_t}}^{{m}}} = \{{ {G_{{t_t}}^{{l_1}}}, {G_{{t_t}}^{{l_2}}}, \ldots , {G_{{t_t}}^{{l_L}}} } \} \) is the multilayer graph at timestamp t. Besides, all nodes in \(G_{dm}\) have node attributes of the same dimension F.
Definition 5
(Graph classification with representation learning) The purpose of graph classification is to predict the class of a graph. In this paper, we consider the anomaly detection in dynamic attributed multilayer token transaction network as a graph classification task.
We learn a mapping function f that encodes the n nodes of an attributed multilayer graph \({G_{{t_t}}^{{m}}}\) at timestamp t into a d-dimensional dense vector space, represented as a matrix \(H \in {\mathcal{R}^{n \times d}}(0 < d \le n), H = \{h_1, h_2, \ldots , h_n\} \). Then, a pooling operation (mean pooling, max pooling, etc.), i.e., a readout function [36], is applied to H to generate a single vector; for mean pooling, \(\mathcal{R}(H) = \sigma (\frac{1}{n}\sum \nolimits _{i = 1}^n {{h_i}} )\), where \(\sigma \) is the Sigmoid function. \(\mathcal{R}(H)\) is the graph-level representation of \({G_{{t_t}}^{{m}}}\) and summarizes the information of the entire graph. With T multilayer token graph snapshots, we obtain T graph-level representation vectors; a classifier is then trained on these T graph-level embeddings to classify each corresponding combined graph as normal or abnormal.
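To make the readout concrete, the following is a minimal NumPy sketch (illustrative only, not the paper's implementation) of the mean-pooling readout \(\mathcal{R}(H)\) defined above, reducing an \(n \times d\) node-embedding matrix to one graph-level vector:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def readout_mean(H):
    """Mean-pooling readout: collapse an (n, d) node-embedding
    matrix into a single d-dimensional graph-level vector."""
    return sigmoid(H.mean(axis=0))

# Toy example: 3 nodes with 2-dimensional embeddings.
H = np.array([[0.0, 1.0],
              [2.0, 3.0],
              [4.0, 5.0]])
g = readout_mean(H)   # graph-level representation, shape (2,)
```

One such vector is produced per snapshot, giving the T graph-level embeddings used by the classifier.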
Model
Overall framework
The proposed MT\(^2\)AD, a Multilayer Temporal Transaction Anomaly Detection method for Ethereum networks with GNNs, includes three modules: extraction of token transaction network initial features, multilayer token transaction network snapshot construction, and anomaly detection with graph representation learning. The overall framework is shown in Fig. 1.
Extraction of token transaction network initial feature
We obtain the ERC-20 token transaction records via an Ethereum client with EthereumETL. ERC-20 is the Ethereum standard for fungible tokens [8], i.e., the interface a smart contract can implement to exchange this kind of token. In this paper, the Binance Coin (BNB), Tether (USDT)^{Footnote 5} and Chainlink (LINK)^{Footnote 6} ERC-20 Ethereum-based token transaction records are extracted. We construct the transaction records of each token as a transaction network, where nodes represent addresses and edges represent transactions between pairs of addresses. Note that each token transaction network is a temporal weighted directed network: the weight represents the transaction amount from the source address to the destination address, and the timestamp (an integer here) on each edge represents the time the transaction was executed. Given the large size of the original token transaction network, the initial feature extraction module first extracts a subgraph (i.e., subgraph sampling) and then models edge features as node attributes for the subsequent graph convolution operation [9].
Token subgraph sampling
The original transaction network is so large that it cannot be processed with deep learning techniques on a common GPU. Taking the BNB token transaction network as an example, the number of new daily nodes reaches around 10,000, as shown in Fig. 2; if one year of BNB transactions is considered, the network becomes very large. Therefore, to maintain a reasonable computation time, the first step is to sample the original token transaction network into a subgraph that retains its prominent features. A common and simple sampling method is the random walk, which has been used in related works [26]. However, because of the inherent randomness of random walks, we instead adopt the maximum-weight subgraph sampling method [10], restricting the size of the transaction subgraph by extracting the p most active edges. This sampling has minimal impact on results: even for the most traded tokens (Tronix and BAT), the top 150 nodes in daily networks account for 75% and 80% of all edges [11]. Subgraph sampling thus reduces computation time and GPU requirements while retaining the important features as much as possible [11].
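The top-p edge selection can be sketched as follows (plain Python on a hypothetical edge list; the paper itself uses the maximum-weight subgraph sampler of [10], of which this is only a simplified view):

```python
def sample_top_p_subgraph(edges, p):
    """Keep the p most active (highest-weight) edges and the nodes
    they touch. `edges` is a list of (src, dst, weight) triples."""
    top = sorted(edges, key=lambda e: e[2], reverse=True)[:p]
    nodes = {u for u, _, _ in top} | {v for _, v, _ in top}
    return nodes, top

# Hypothetical daily edge list; weights stand for transaction activity.
edges = [("a", "b", 10.0), ("b", "c", 1.0), ("c", "d", 7.0), ("d", "a", 0.5)]
nodes, sub = sample_top_p_subgraph(edges, p=2)
```

Here p bounds the subgraph size directly, which keeps the downstream GNN input at a fixed, GPU-friendly scale.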
Modeling edge’s feature
Each edge in a transaction network carries transaction value and direction information, which are basic and important features. However, the traditional graph convolutional network [9] is unsuitable for weighted directed graphs. Transforming the transaction values and direction information into node attributes can, to some extent, reduce the information loss when learning graph representations with the graph convolution operation. In this article, we use the node attribute matrix X to model the edges' transaction values and direction information, described as follows:

Out-degree For a given node, the out-degree is the number of outgoing edges [37]. In a transaction network, it is the number of transactions the current node sends to other nodes.

In-degree For a given node, the in-degree is the number of incoming edges [37]. In a transaction network, it is the number of transactions other nodes send to the current node.

Degree For a given node, the degree is the number of one-hop neighbors [37]. In a transaction network, it is the total number of edges (transactions) incident to the current node, reflecting the node's activity or importance in the graph.

Normalized transaction amount The transaction amount is the weight between a pair of addresses. To simplify calculation and enable better comparison, we normalize the weight (transaction amount) to [0, 1] by dividing by the maximum over all node transactions.
Therefore, the edge features are transformed into node attributes \(X \in {\mathrm{{\mathcal{R}}}^{n \times 4}}\), with four attribute dimensions, where n is the number of nodes.
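A minimal NumPy sketch of building X from an edge list follows. The exact per-node aggregation of transaction amounts is an assumption here (summing amounts over a node's incident edges before dividing by the maximum); the four columns match the features listed above:

```python
import numpy as np

def edge_features_to_node_attrs(nodes, edges):
    """Turn edge weight/direction information into a node attribute
    matrix X of shape (n, 4): out-degree, in-degree, degree, and
    total transaction amount normalized to [0, 1]."""
    idx = {v: i for i, v in enumerate(nodes)}
    X = np.zeros((len(nodes), 4))
    for u, v, w in edges:
        X[idx[u], 0] += 1          # out-degree of the sender
        X[idx[v], 1] += 1          # in-degree of the receiver
        X[idx[u], 3] += w          # amount sent
        X[idx[v], 3] += w          # amount received
    X[:, 2] = X[:, 0] + X[:, 1]    # total degree
    m = X[:, 3].max()
    if m > 0:
        X[:, 3] /= m               # normalize amounts by the maximum
    return X

# Hypothetical toy network: a sends 5.0 to b and 3.0 to c.
nodes = ["a", "b", "c"]
edges = [("a", "b", 5.0), ("a", "c", 3.0)]
X = edge_features_to_node_attrs(nodes, edges)
```

The resulting X then serves as the input feature matrix for the graph convolution encoder.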
Multilayer token transaction network snapshot construction
In “Modeling edge’s feature”, the transaction values and transaction flows between linked nodes were transformed into node attributes. In this section, we construct token network snapshots according to the edge timestamps of a given token transaction network. This involves two steps. First, token network snapshots are extracted from each token transaction network according to the edges' timestamps. Second, at a given timestamp t, the token network snapshots \({G_{{t_t}}^{{l_1}}}, {G_{{t_t}}^{{l_2}}}, \ldots , {G_{{t_t}}^{{l_L}}}\) are combined into a new graph (i.e., \( {G_{{t_t}}^{{m}}} = \{{ {G_{{t_t}}^{{l_1}}}, {G_{{t_t}}^{{l_2}}}, \ldots , {G_{{t_t}}^{{l_L}}} } \} \)) for downstream graph representation learning, as shown in Fig. 1b.
Token network snapshot extraction
Given the lth dynamic token transaction network \(G_d \), we transform it into a series of network snapshots according to the edges' timestamps. The obtained network snapshots can be represented as \( {G_{{d}}^{{l_l}}} = \{{ {G_{{t_1}}^{{l_l}}}, {G_{{t_2}}^{{l_l}}}, \ldots , {G_{{t_T}}^{{l_l}}} } \} \), where consecutive snapshots are separated by the time window \(\Delta t\), i.e., \(t^{\prime } = t + \Delta t\), so that \(t_2 = t_1 + \Delta t\). \(\Delta t\) can be set to different values according to the requirements of different analysis tasks; in this paper, \(\Delta t\) is set to 1 day (24 h). For the different token transaction networks (i.e., different layers \(l_1,l_2, \ldots ,l_L\)), we obtain the corresponding network snapshots \( {G_{{d}}^{{l_1}}} = \{{ {G_{{t_1}}^{{l_1}}}, {G_{{t_2}}^{{l_1}}}, \ldots , {G_{{t_T}}^{{l_1}}} } \}, {G_{{d}}^{{l_2}}} = \{{ {G_{{t_1}}^{{l_2}}}, {G_{{t_2}}^{{l_2}}}, \ldots , {G_{{t_T}}^{{l_2}}} } \}, \ldots , {G_{{d}}^{{l_L}}} = \{{ {G_{{t_1}}^{{l_L}}}, {G_{{t_2}}^{{l_L}}}, \ldots , {G_{{t_T}}^{{l_L}}} } \} \).
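The snapshot extraction can be sketched as follows (plain Python; the edge tuples and the epoch origin `t0` are hypothetical, and \(\Delta t\) is one day as in the text):

```python
def to_snapshots(edges, t0, dt):
    """Split timestamped edges (src, dst, weight, ts) into consecutive
    snapshots of width dt, starting at t0."""
    snaps = {}
    for u, v, w, ts in edges:
        k = (ts - t0) // dt               # index of the snapshot this edge falls in
        snaps.setdefault(k, []).append((u, v, w))
    return [snaps.get(k, []) for k in range(max(snaps) + 1)]

DAY = 24 * 3600                            # dt = 1 day (24 h), as in the paper
# Hypothetical edges: two on day 1, one on day 2.
edges = [("a", "b", 1.0, 0), ("b", "c", 2.0, DAY + 5), ("c", "a", 3.0, 10)]
snapshots = to_snapshots(edges, t0=0, dt=DAY)
```

Each resulting edge list corresponds to one snapshot \(G_{t_t}^{l_l}\) of the layer's dynamic graph.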
Multiple token snapshot combination
Analyzing cross-cryptocurrency transactions [7] is critical for identifying illicit activities on the blockchain. Existing research [7] has shown that actors' behavior patterns largely depend on the role they play as money moves across the ledgers of different cryptocurrencies; therefore, common behavior patterns exist across different token transaction networks. However, in “Token network snapshot extraction”, we only obtain a series of network snapshots for each dynamic token transaction network separately. To take advantage of the information in different token transaction networks and capture their common features with graph representation learning, we combine the multiple token transaction networks at a given timestamp \({t_t}\) into one graph \( {G_{{t_t}}^{{m}}} = \{{ {G_{{t_t}}^{{l_1}}}, {G_{{t_t}}^{{l_2}}}, \ldots , {G_{{t_t}}^{{l_L}}} } \} \), i.e., the combined transaction graph of the \({l_1},{l_2}, \ldots , {l_L}\) token networks at timestamp \({t_t}\). In this way, we obtain T combined transaction graphs \({G_{{t_1}}^{{m}}}, {G_{{t_2}}^{{m}}}, \ldots , {G_{{t_T}}^{{m}}}\) for further graph classification.
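A sketch of the combination step (plain Python, hypothetical data). Whether addresses shared across tokens are merged into a single node is a modeling choice not fully spelled out above; this sketch merges them, which is what would let a shared encoder see cross-cryptocurrency patterns, and tags each edge with its token layer:

```python
def combine_snapshots(layer_snaps):
    """Merge the same-timestamp snapshots of L token networks into one
    combined graph. An address appearing in several tokens becomes a
    single node; each edge keeps a tag identifying its token layer."""
    nodes, edges = set(), []
    for layer, snap in enumerate(layer_snaps):
        for u, v, w in snap:
            nodes.update((u, v))
            edges.append((u, v, w, layer))
    return nodes, edges

bnb_t  = [("a", "b", 1.0)]          # hypothetical BNB snapshot at timestamp t
usdt_t = [("b", "c", 2.0)]          # hypothetical USDT snapshot at the same t
nodes, edges = combine_snapshots([bnb_t, usdt_t])
# "b" trades in both tokens and is merged into a single node.
```

Repeating this at every timestamp yields the T combined graphs fed to the encoder.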
Anomaly detection with graph representation learning
In this section, we introduce our graph encoder, which learns node embeddings of each combined transaction graph (i.e., \({G_{{t_t}}^{{m}}}, {t_t} \in \{ {t_1},{t_2}, \ldots ,{t_T}\}\)); a graph pooling operation then yields an entire graph-level embedding, on which graph classification performs anomaly detection to predict the label of each combined transaction graph. The label (normal or abnormal) of each combined transaction graph is generated according to the blockchain daily events from Wikipedia,^{Footnote 7} the same labels used in [11]. We therefore transform anomaly detection into a graph classification task: we learn the graph-level embedding of each combined transaction graph and use these embeddings to train a graph classifier that predicts the labels. The anomaly detection pipeline with graph representation learning is shown in Fig. 3 and includes a Graph Convolution Layer, a Pooling Layer and a Graph Classification Layer.
Graph information encode
Graph convolution layer To perform anomaly detection on a combined graph with representation learning, we first obtain the embedding of each node in \({G_{{t_t}}^{{m}}}\). A pooling operation is then applied to all node embeddings in \({G_{{t_t}}^{{m}}}\) to obtain the overall graph-level embedding, a vector that retains the graph's overall feature information for classification. In token transaction networks, a node usually transacts with multiple nodes at the same time, and interacting node pairs often share similar characteristics; for example, if a node is a black node in black market transactions, nodes with direct transactions with it may also be black. Besides, nodes in the original transaction network only have structural features, and the node attributes transformed from edge features in “Modeling edge’s feature” are sparse. If only a node's own structural features and attributes were considered during graph encoding, the resulting node embeddings would be unsatisfactory. To solve this problem, we exploit neighborhood information through aggregation operations that integrate information from a node's neighbors. Since different neighbors affect the current node differently, we utilize the multi-head attention mechanism [12] to capture these differences.
For each node in a given transaction network, we first learn the effect weights between the node and its neighbors, and then perform weighted aggregation according to the obtained weights. For a given node \({n_i}\) with one-hop neighborhood \({N({n_i})}\) and \({n_j} \in {N({n_i})}\), the effect weight is learned from the features of \({n_i}\) and its neighbors as follows:

\[ e_{ij} = \sigma \left( a_w^T \left[ W h_i \,\Vert \, W h_j \right] \right) \qquad (1) \]
where \({h_i}\) and \({h_j}\) are the node features of \({n_i}\) and \({n_j}\), respectively; W is the learnable parameter matrix; \(a_w^T\) is the attention parameter matrix; \(\cdot ^T\) denotes the transpose; \(\Vert \) is the concatenation operation; and \(\sigma \) is the activation function.
Then, the softmax function is adopted in Eq. 2 to normalize the effect-weight coefficients into [0, 1].
Finally, the node embedding of \({n_i}\) is generated through the aggregation operation with multi-head attention [12] in Eq. 3. The schematic diagram of multi-head attention is shown in Fig. 4.
where \({\Vert }\) represents the concatenation operation, \({a_{ij}^{k}}\) are the normalized attention coefficients from the kth attention head (\({{a^{k}}}\)), \({W^k}\) is the corresponding linear transformation matrix, and \({h_j}\) is the embedding of a neighbor \({n_j}\) of node \({n_i}\).
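The attention-based aggregation of Eqs. 1–3 can be sketched in NumPy as a GAT-style layer. This is a minimal illustration, not the paper's implementation: the split of the attention vector into "self" and "neighbor" halves, the LeakyReLU slope, and the large negative masking value are standard GAT conventions assumed here.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def attention_aggregate(H, adj, W, a, heads=2):
    """One GAT-style layer: per head, score each node pair with a shared
    attention vector (Eq. 1), softmax-normalise over neighbours (Eq. 2),
    then aggregate and concatenate the heads (Eq. 3)."""
    outs = []
    for k in range(heads):
        Z = H @ W[k]                          # W^k h, shape (n, F')
        Fp = Z.shape[1]
        score_self = Z @ a[k][:Fp]            # contribution of h_i
        score_nbr = Z @ a[k][Fp:]             # contribution of h_j
        e = leaky_relu(score_self[:, None] + score_nbr[None, :])
        e = np.where(adj > 0, e, -1e9)        # keep only real neighbours
        alpha = np.exp(e - e.max(axis=1, keepdims=True))
        alpha /= alpha.sum(axis=1, keepdims=True)
        outs.append(alpha @ Z)                # weighted aggregation
    return np.concatenate(outs, axis=1)       # K-head concatenation
```

Because the heads are independent, the loop over `k` could run in parallel, which is the property the complexity discussion below relies on.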
Pooling layer Through the aggregation operation with the attention mechanism, we obtain the node embeddings (\({H_{{t_1}}^{{m}}},{H_{{t_2}}^{{m}}}, \ldots ,{H_{{t_T}}^{{m}}}\), where \({H_{{t_t}}^{{m}}} = \{h_1, h_2, \ldots , h_n\}_{{t_t}}^{{m}}\)) of the combined graph at each timestamp, i.e., \({G_{{t_1}}^{{m}}},{G_{{t_2}}^{{m}}}, \ldots ,{G_{{t_T}}^{{m}}}\). We then apply a readout function (mean pooling [38], Eq. 4) over all nodes of a given graph \({G_{{t_t}}^{{m}}}\) to obtain the overall graph-level representation, i.e., to reduce the whole graph to a single vector for the subsequent graph classification anomaly detection.
where \(\sigma \) is the Sigmoid function and \(\mathcal{R}\) denotes the readout function.
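Under the reading of Eq. 4 suggested by this description (a sigmoid applied to the mean of the node embeddings — an assumption, since the equation itself is not reproduced here), the readout is a one-liner:

```python
import numpy as np

def readout(H):
    """Graph-level representation: mean-pool the node embedding matrix H
    (one row per node) and squash with a sigmoid, as Eq. 4 describes."""
    return 1.0 / (1.0 + np.exp(-H.mean(axis=0)))
```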
Graph classification loss function
In this paper, we perform anomaly detection as a graph classification task, where each \({G_{{t_t}}^{{m}}}\) corresponds to a label (normal or abnormal). We minimize the cross-entropy loss to train the proposed model, as follows:
where S represents the training samples, \({y_i}\) is the true label of graph \({g_i}\), \({\hat{y}_i}\) is the predicted label of graph \({g_i}\), and C is the number of graph classes. \(\theta \) denotes the learnable parameters, summarized as \(\theta = \{ {W},{e}\} \).
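The loss above is the standard multi-class cross-entropy; a minimal sketch follows, assuming one-hot labels and predicted class probabilities as inputs (the exact encoding used in the paper may differ):

```python
import numpy as np

def cross_entropy(y_true, y_prob, eps=1e-12):
    """L_GC averaged over |S| graphs and C classes: y_true is (|S|, C)
    one-hot, y_prob is (|S|, C) predicted class probabilities."""
    y_prob = np.clip(y_prob, eps, 1.0)  # guard against log(0)
    return -np.mean(np.sum(y_true * np.log(y_prob), axis=1))
```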
The loss function is minimized by gradient descent, and the parameters \(\theta \) are learned via the backpropagation algorithm: while \({L_{GC}}\) has not converged, the partial derivative \(\frac{{\partial {L_{GC}}}}{{\partial \theta }}\) is computed and \(\theta \) is updated. The MT\(^2\)AD algorithm is summarized in Algorithm 1.
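The training procedure just described is plain gradient descent until convergence; a generic sketch (the learning rate, tolerance and epoch cap here are illustrative defaults, not the paper's settings):

```python
def gradient_descent(params, loss_fn, grad_fn, lr=5e-4, tol=1e-6,
                     max_epochs=1000):
    """Repeat: evaluate L_GC, stop once it no longer decreases by more
    than `tol`, otherwise step each parameter against its gradient."""
    prev = float("inf")
    for _ in range(max_epochs):
        loss = loss_fn(params)
        if prev - loss < tol:          # convergence check on L_GC
            break
        grads = grad_fn(params)        # dL_GC / d(theta) via backprop
        params = [p - lr * g for p, g in zip(params, grads)]
        prev = loss
    return params
```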
Time complexity
MT\(^2\)AD includes three modules: extraction of the token transaction network's initial features, multilayer token transaction network snapshot construction, and anomaly detection with graph representation learning. In the initial feature extraction module, the token subgraph sampling and the modeling of edge features can be computed in advance; in the snapshot construction module, the token network snapshot extraction and multiple token snapshot combination can likewise be precomputed. Therefore, the time complexity of MT\(^2\)AD is dominated by the anomaly detection with graph representation learning. We use the graph convolutional network with multi-head attention to extract the features of the T combined transaction graphs, and average pooling to obtain the graph-level representation vector. The time complexity depends on the type of convolution and the number of layers. The convolution operation on graphs multiplies the node features by a weight matrix and then aggregates the features from neighboring nodes. The weight matrix multiplication has a time complexity of \(O(\vert V \vert F F')\), where F is the dimension of the node features and \(F'\) is the dimension of the output features. The aggregation step has a time complexity of \(O(\vert E \vert F')\), where \(\vert E \vert \) is the number of edges in the graph. Therefore, the overall time complexity of one convolution layer is \(O(\vert V \vert FF' + \vert E \vert F')\). The attention mechanism computes a pairwise similarity score between each node and its neighbors and then applies a softmax function to normalize the scores; its time complexity is \(O( F \vert V \vert ^2)\). In this paper, we use multi-head attention; each head's calculations are independent and can be parallelized.
As for the average pooling, the time complexity is \(O(\vert V \vert )\), and \(\vert V \vert \ll {\vert V \vert ^2}\). Therefore, the overall time complexity of MT\(^2\)AD simplifies to \(O(\vert V \vert FF' + \vert E \vert F' + F \vert V \vert ^2)\).
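As a back-of-the-envelope check of which term dominates, the stated complexity can be turned into a rough operation count (constant factors ignored; V, E, F, F' as defined above):

```python
def mt2ad_ops(V, E, F, Fp):
    """Rough per-graph operation count for O(|V|FF' + |E|F' + F|V|^2)."""
    conv = V * F * Fp + E * Fp   # weight multiplication + aggregation
    attn = F * V * V             # pairwise attention scores
    return conv + attn + V       # + O(|V|) mean pooling
```

For even moderately sized snapshots the \(F \vert V \vert ^2\) attention term dominates, which is why parallelizing the independent attention heads matters in practice.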
Experiments
In this section, we present the experimental results of the proposed MT\(^2\)AD anomaly detection method. We aim to answer the following Research Questions (RQs):

RQ1 How does the proposed method benefit anomaly detection on multiple token transaction networks?

RQ2 How feasible is the model? i.e., how does the loss value change with the epoch (one epoch is a complete pass of the training set through the algorithm), and how does the single-layer token network compare with the multilayer token transaction network?

RQ3 How do different parameters (embedding dimension sizes and numbers of attention heads) affect the model?
Implementation details
Baselines
To illustrate the effectiveness of our methods, we compare our proposed MT\(^2\)AD with some competing methods:

MLP [39]: The Multilayer Perceptron (MLP) is a fully connected feedforward artificial neural network (ANN) classifier. In this experiment, we use an MLP instead of a graph convolutional network to learn deep features of the transaction network. This means that only the node attributes are considered in this setup, without the transaction structure subgraph.

GCN [9]: GCN is a graph convolutional network that performs aggregation operations between the current node and its neighbors. In this setup, the multi-head attention mechanism is not considered, meaning that different neighbors have equal influence on the current node.

FEATHER [29]: FEATHER is an algorithm for node-level and graph-level representation learning that considers neighborhood feature distributions. It computes a specific variant of characteristic functions defined on graph vertices to describe the distribution of vertex features at multiple scales, and these characteristic functions efficiently create node embeddings. The features extracted by this procedure are useful for machine learning tasks.

Graph2vec [30]: Graph2vec is the first neural embedding method to learn representations of entire graphs in an unsupervised manner. It is similar to doc2vec [40]: it needs a corpus of graphs to learn representations and follows the doc2vec skip-gram [25] training process.

GL2vec [31]: Building on Graph2vec, GL2vec takes edge labels into consideration by utilizing the line graphs (edge-to-vertex dual graphs) of the input graphs. The resulting graph embedding is the concatenation of the embeddings of the input graph and its corresponding line graph.
Datasets
We first obtained the BNB, LNK and USDT token transaction records. The labels of the blockchain transaction network have been verified in [11]; they are generated according to the Blockchain events from Wikipedia, so the time spans of BNB, LNK and USDT match those of the labels. The end time of BNB, LNK and USDT is May 2018, and their start times are July 2017, September 2017 and November 2017, respectively. We use D1 to denote our own dataset. Besides, we also use the Ethereum Token Networks (bytom, cybermiles, decentraland, tierion, vechain and zrx) and Ripple Currency Networks (JPY, USD, EUR, CCK and CNY) from [11],^{Footnote 8} denoted D2 and D3, respectively, to evaluate the performance of different methods. Because these two original datasets only contain transaction structure features, we likewise transform each edge's transaction value and transaction flow direction information into node attributes. The statistics of the three datasets are shown in Table 1.
Evaluation metrics
As in article [5], we use the Precision, Recall and F-value metrics to evaluate the performance of different methods on anomaly detection. The F-beta score can adjust the relative weight between Precision and Recall through the \(\beta \) coefficient to achieve better results (Eq. 6).
Therefore, in this paper, we use the F-beta-Macro and F-beta-Weighted metrics instead of the F1-value to evaluate the performance of different methods on anomaly detection. The \(\beta \) value is 5, 7 and 7 for D1, D2 and D3, respectively.
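The F-beta score in Eq. 6 weights Recall more heavily whenever \(\beta > 1\); a direct implementation of the per-class score (the Macro and Weighted variants then average these per-class values, unweighted or by class support):

```python
def fbeta(precision, recall, beta):
    """F-beta (Eq. 6): (1 + b^2) * P * R / (b^2 * P + R)."""
    return ((1 + beta ** 2) * precision * recall
            / (beta ** 2 * precision + recall))
```

With \(\beta = 1\) this reduces to the familiar F1 harmonic mean; with \(\beta = 5\) (as used for D1), a high Recall compensates for a lower Precision far more than the reverse.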
Parameter settings
The graph embedding dimension d of all methods is set to 8. For the attention mechanism, the number of attention heads is set to 2. In our model, we utilize one convolution layer to aggregate the information from neighbors; therefore, the convolution size is \(F \times 4\). The learning rate and weight decay are set to 0.0005 and 1e−5, respectively. For GCN, the parameters were set from its official implementation. FEATHER, Graph2vec and GL2vec are implemented in karateclub.^{Footnote 9}
Performance results (RQ1)
To answer RQ1, we compare all methods on the task of anomaly detection on multiple token transaction networks. The results are shown in Table 2. We have the following observations:

Compared with the MLP method, GCN greatly improves the performance. The reason is that MLP only considers manually extracted node attributes, which are not enough to describe the multiple token transaction network. GCN takes the node attributes and the structure of the transaction network into account simultaneously, performing aggregation operations to integrate the information from neighbors. This demonstrates the advantage of the graph convolutional network and the importance of considering the network structure. In a transaction network, transaction addresses are anonymous and the nodes themselves have no attributes, so the differences between nodes lie mainly in the transaction structure and edge attributes. In this paper, we transform edge attributes (timestamp and transaction value) into node attributes so as to further take advantage of the graph convolutional network for extracting deep features.

In most cases, FEATHER performs better than Graph2vec and GL2vec, because FEATHER considers neighborhood features when creating node embeddings; this idea of exploiting neighborhood feature information is consistent with ours. Comparing FEATHER with our method, ours achieves better results, demonstrating the advantage of our graph neural network technique.

The results of the GCN method and our proposed approach show that our method achieves excellent results. The reason is that we use the multi-head attention mechanism to aggregate information from neighbors, which captures the differing influence of different neighbors.

Compared with the Graph2vec and GL2vec methods, our method and GCN achieve better performance. This demonstrates the advantage of graph neural networks over skip-gram when processing graph data: a graph neural network is able to capture node pairs' similarity from node attributes and structure simultaneously.
Study of the proposed model (RQ2)
Loss change
To answer RQ2, we take the D1 dataset as an example to examine the change of the loss value on the training and test sets; the result is shown in Fig. 5. The x-axis represents the epoch and the y-axis the loss value. The training loss decreases steadily and gradually reaches a stable state, and the test loss also decreases during training. This demonstrates the effectiveness of our proposed method.
Multiple tokens
In this section, we take the D1 dataset as an example to evaluate anomaly detection on single-layer and multilayer transaction networks, respectively. The results are shown in Table 3. We observe that the performance on a single token transaction network is unsatisfactory compared with the overall multilayer token transaction network (three tokens here). The reason is that a single token cannot model the comprehensive information of the transaction network, since there are cross-cryptocurrency trading patterns. MT\(^2\)AD can describe these cross-cryptocurrency trading patterns and hence improves the performance. Besides, we use the same graph encoder on all token transaction networks to learn node embeddings; this not only captures each token's respective characteristics through the convolution operation on each token's transaction structure and node attributes, but also captures the features common to all the token transaction networks through the shared graph encoder.
Ablation experiment
In this section, we conduct an ablation experiment on the node attribute matrix transformed from edge features; that is, we consider only the transaction structure information, so the input of the graph encoder is the adjacency matrix together with an identity node attribute matrix replacing the transformed one. In this case, the edge features (out-degree, in-degree, degree and normalized transaction amount) are not taken into consideration. We take the D1 dataset as an example; the results are shown in Table 4. When the edge features are ignored, the performance degrades, because the identity node attribute matrix carries no transaction flow or transaction amount information. This demonstrates that transforming edge features into the node attribute matrix lets the graph embedding retain the edges' transaction information during the graph convolution operation.
Sensitivity analysis (RQ3)
To answer RQ3, we further evaluate the performance of our proposed method under different embedding dimension sizes and numbers of attention heads to see how these two hyperparameters affect the performance.
Effect of attention size
As for the attention size used when aggregating information from different neighbors, the performance changes of our model are shown in Fig. 6. The performance on the three datasets follows different trends under different attention sizes: on D1, the Fb-Macro remains nearly constant, while on D2 and D3 it varies with the attention size. The other three evaluation metrics achieve similar results on all three datasets. Our method still achieves good performance when the attention size is small (attention size 2). This demonstrates the importance of taking the neighborhood information and its differing influences into consideration.
Effect of embedding size
The embedding size is another hyperparameter that affects the performance of our method. We vary it from 4 to 64 to investigate how it influences the graph classification for anomaly detection on the transaction network; the results are shown in Fig. 7. A high embedding size tends to overfit, so the performance decreases, and it also increases the computational cost. Conversely, a low embedding size might be unable to capture the transaction network information, so the performance suffers: for example, on the D1 and D2 datasets, the Precision at a small embedding size (\(d=4\)) is relatively worse than at a larger one (\(d=8\)). Therefore, we set the embedding size to 8 to balance computational cost and performance.
Conclusion
In this paper, we proposed an anomaly detection model for multilayer Ethereum transaction networks based on graph representation learning and demonstrated its effectiveness through three research questions. We first showed the anomaly detection performance in comparison with several baselines, demonstrating the competitiveness of our proposed method. Then, we analyzed the change of the loss value with the epoch and compared single-layer and multilayer token transaction networks to show the feasibility of our method. Finally, we analyzed the sensitivity to the embedding dimension size and the number of attention heads. The key to the proposed model is that it transforms anomaly detection into a graph classification task and models edge features as node attributes, which lets the graph convolution network with attention mechanism naturally integrate the transaction structure and node attributes. Besides, our method captures cross-cryptocurrency trading patterns on multiple token transaction networks through the multilayer token transaction network snapshot construction module and the graph encoder module. These strategies improve the model's overall anomaly detection performance.
As a next step, we will explore unsupervised graph neural network methods to analyze the Bitcoin transaction network. This is particularly important, as labelling Bitcoin addresses is a fundamental problem for further developing the growing field of blockchain analytics.
Notes
USDT: 0xdAC17F958D2ee523a2206206994597C13D831ec7.
Chainlink (LNK): 0x514910771AF9Ca656af840dff83E8264EcF986CA.
References
Wang Z, Jin H, Dai W, Choo KKR, Zou D (2021) Ethereum smart contract security research: survey and future research opportunities. Front Comp Sci 15:1–18
Chen H, Pendleton M, Njilla L, Xu S (2020) A survey on ethereum systems security: vulnerabilities, attacks, and defenses. ACM Comput Surv 53(3):1–43
Xu J, Livshits B (2019) The anatomy of a cryptocurrency pump-and-dump scheme. In: Proceedings of the 28th USENIX conference on security symposium. SEC’19. USENIX Association, USA, pp 1609–1625
Li S, Gou G, Liu C, Hou C, Li Z, Xiong G (2022) Ttagn: temporal transaction aggregation graph network for ethereum phishing scams detection. In: Proceedings of the ACM Web conference 2022. WWW ’22. Association for Computing Machinery, New York, NY, USA, pp. 661–669
Chen L, Peng J, Liu Y, Li J, Xie F, Zheng Z (2020) Phishing scams detection in ethereum transaction network. ACM Trans Internet Technol 21(1):1–16
Wu J, Yuan Q, Lin D, You W, Chen W, Chen C, Zheng Z (2022) Who are the phishers? phishing scam detection on ethereum via network embedding. IEEE Trans Syst Man Cybernet Syst 52(2):1156–1166
Yousaf H, Kappos G, Meiklejohn S (2019) Tracing transactions across cryptocurrency ledgers. In: Proceedings of the 28th USENIX conference on security symposium. SEC’19. USENIX Association, USA, pp 837–850
De Collibus FM, Partida A, Piškorec M, Tessone CJ (2021) Heterogeneous preferential attachment in key Ethereum-based cryptoassets. Front Phys 9:568
Kipf TN, Welling M (2017) Semi-supervised classification with graph convolutional networks. In: 5th International conference on learning representations, ICLR 2017, Toulon, France, April 24–26, 2017, conference track proceedings
Vassilevska V, Williams R, Yuster R (2006) Finding the smallest hsubgraph in real weighted graphs and related problems. In: Bugliesi M, Preneel B, Sassone V, Wegener I (eds) Automata, languages and programming. Springer, Berlin, pp 262–273
OforiBoateng D, Dominguez IS, Akcora C, Kantarcioglu M, Gel YR (2021) Topological anomaly detection in dynamic multilayer blockchain networks. In: Oliver N, PérezCruz F, Kramer S, Read J, Lozano JA (eds) Machine learning and knowledge discovery in databases research track. Springer, Cham, pp 788–804
Velickovic P, Cucurull G, Casanova A, Romero A, Liò P, Bengio Y (2018) Graph attention networks. In: 6th International conference on learning representations, ICLR
Grover A, Leskovec J (2016) Node2vec: scalable feature learning for networks. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. KDD ’16. Association for Computing Machinery, New York, NY, USA, pp 855–864
Perozzi B, AlRfou R, Skiena S (2014) Deepwalk: online learning of social representations. In: Proceedings of the ACM SIGKDD international conference on knowledge discovery and data mining, pp 701–710
Wei L, He Z, Zhao H, Yao Q (2023) Search to capture longrange dependency with stacking gnns for graph classification. In: Proceedings of the ACM Web conference 2023. WWW ’23. Association for Computing Machinery, New York, NY, USA, pp 588–598
Zhang B, Luo S, Wang L, He D (2023) Rethinking the expressive power of GNNs via graph biconnectivity. In: The Eleventh international conference on learning representations
Chamberlain BP, Shirobokov S, Rossi E, Frasca F, Markovich T, Hammerla NY, Bronstein MM, Hansmire M (2023) Graph neural networks for link prediction with subgraph sketching. In: The eleventh international conference on learning representations
Zhou Y, Huo H, Hou Z, Bu F (2023) A deep graph convolutional neural network architecture for graph classification. PLoS One 18(3):e0279604
Hamilton WL, Ying R, Leskovec J (2017) Inductive representation learning on large graphs. In: Proceedings of the 31st international conference on neural information processing systems. NIPS’17. Curran Associates Inc., Red Hook, NY, USA, pp 1025–1035
Ying C, Cai T, Luo S, Zheng S, Ke G, He D, Shen Y, Liu TY (2021) Do transformers really perform badly for graph representation? In: Thirtyfifth conference on neural information processing systems
Chen J, Gao J, Chen Y, Oloulade BM, Lyu T, Li Z (2022) Autognas: a parallel graph neural architecture search framework. IEEE Trans Parall Distrib Syst 33:1–1
Hashemi F, Behrouz A, Hajidehi MR (2023) Cstgn: community search via temporal graph neural networks. arXiv preprint arXiv:2303.08964
Réau M, Renaud N, Xue LC, Bonvin AM (2023) Deeprankgnn: a graph neural network framework to learn patterns in proteinprotein interfaces. Bioinformatics 39(1):759
Xia Y, Liu J, Wu J (2022) Phishing detection on ethereum via attributed egograph embedding. IEEE Trans Circuits Syst II Express Briefs 69(5):2538–2542
Mikolov T, Sutskever I, Chen K, Corrado G, Dean J (2013) Distributed representations of words and phrases and their compositionality. In: Proceedings of the 26th international conference on neural information processing systems—volume 2. NIPS’13, Red Hook, NY, USA, pp 3111–3119
Wang J, Chen P, Yu S, Xuan Q (2021) Tsgn: transaction subgraph networks for identifying ethereum phishing accounts. In: International conference on blockchain and trustworthy systems. Springer, pp 187–200
Ma X, Wu J, Xue S, Yang J, Zhou C, Sheng Q, Xiong H, Akoglu L (2021) A comprehensive survey on graph anomaly detection with deep learning. IEEE Trans Knowl Data Eng:1–1
Nikolentzos G, Siglidis G, Vazirgiannis M (2022) Graph kernels: a survey. J Artif Intell Res 72:943–1027
Rozemberczki B, Sarkar R (2020) Characteristic functions on graphs: birds of a feather, from statistical descriptors to parametric models. In: Proceedings of the 29th ACM international conference on information & knowledge management
Narayanan A, Chandramohan M, Venkatesan R, Chen L, Liu Y, Jaiswal S (2017) Graph2vec: learning distributed representations of graphs. In: 13th International workshop on mining and learning with graphs (MLGWorkshop 2017)
Chen H, Koga H (2019) Gl2vec: graph embedding enriched by line graphs with edge features. In: Gedeon T, Wong KW, Lee M (eds) Neural information processing. Springer, Cham, pp 3–14
Ying R, You J, Morris C, Ren X, Hamilton WL, Leskovec J (2018) Hierarchical graph representation learning with differentiable pooling. In: Proceedings of the 32nd international conference on neural information processing systems. NIPS’18. Curran Associates Inc., Red Hook, NY, USA, pp 4805–4815
Lee J, Lee I, Kang J (2019) Selfattention graph pooling. In: Proceedings of the 36th international conference on machine learning, pp 3734–3743
Zhao H, Zhang C (2021) Gaunets: graph attention unets for image classification. J Phys Conf Ser 1861:012045
Gao H, Ji S (2022) Graph U-Nets. IEEE Trans Pattern Anal Mach Intell 44(9):4948–4960
Wu Z, Shen H, Cao Q, Cheng X (2022) Survey on graph classification. J Softw 33:171–192
Boccaletti S, Latora V, Moreno Y, Chavez M, Hwang DU (2006) Complex networks: structure and dynamics. Phys Rep 424(4):175–308
Mesquita D, Souza A, Kaski S (2020) Rethinking pooling in graph neural networks. Adv Neural Inf Process Syst 33:2220–2231
von der Malsburg C (1986) Frank Rosenblatt: Principles of neurodynamics: perceptrons and the theory of brain mechanisms. Brain Theory:245–248
Le Q, Mikolov T (2014) Distributed representations of sentences and documents. In: Proceedings of the 31st international conference on international conference on machine learning—volume 32. ICML’14
Acknowledgements
This work is supported by the National Natural Science Foundation of China (NSFC) under grant number 61873274.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
No conflict of financial interest or personal relationship exists in the submission of this manuscript, and it is approved by all authors for publication.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Han, B., Wei, Y., Wang, Q. et al. MT\(^2\)AD: multilayer temporal transaction anomaly detection in ethereum networks with GNN. Complex Intell. Syst. 10, 613–626 (2024). https://doi.org/10.1007/s4074702301126z