Long short-term enhanced memory for sequential recommendation

Sequential recommendation is a stream of studies on recommender systems, which focuses on predicting the next item a user interacts with by modeling the dynamic sequence of user-item interactions. Since being born to explore the dynamic tendency of variable-length temporal sequence, Recurrent Neural Networks (RNNs) have been paid much attention in this area. However, the inherent defects caused by the network structure of RNNs have limited their applications in sequential recommendation, which are mainly shown on two factors: RNNs tend to make point-wise predictions and ignore the collective dependencies because the temporal relationships between items change monotonically; RNNs are likely to forget the essential information during processing long sequences. To solve these problems, researchers have done much work to enhance the memory mechanism of RNNs. However, although previous RNN-based methods have achieved promising performance by taking advantage of external knowledge with other advanced techniques, the improvement of the intrinsic property of existing RNNs has not been explored, which is still challenging. Therefore, in this work, we propose a novel architecture based on Long Short-Term Memories (LSTMs), a broadly-used variant of RNNs, specific for sequential recommendation, called Long Short-Term enhanced Memory (LSTeM), which boosts the memory mechanism of original LSTMs in two ways. Firstly, we design a new structure of gates in LSTMs by introducing a “Q-K-V” triplet, a mechanism to accurately and properly model the correlation between the current item and the user’s historical behaviors at each time step. Secondly, we propose a “recover gate” to remedy the inadequacy of memory caused by the forgetting mechanism, which works with a dynamic global memory embedding. Extensive experiments have demonstrated that LSTeM achieves comparable performance to the state-of-the-art methods on the challenging datasets for sequential recommendation.


Introduction
his computer, keyboard and even earphones, but irrelevant with the shoes he/she just bought before the mouse.
Another contributing factor is that RNNs are likely to forget the essential information during processing long sequences. The vanilla RNNs suffer from gradient vanishing and exploding when dealing with long-term information, so LSTMs and GRUs are designed to be equipped with the forget gate mechanism that allows the model to remember the key points and forget the insignificance along with the prediction procedure. However, this mechanism is more applicable for the sequences with only one or few key points, e.g., sentences, but does not work well for the sequences with plentiful key points, e.g., users' behavior sequences, in which, the dynamics of users' interests produce the complex dependencies among the items, and further generate many key points in a sequence. This is because the forget gates make the model abandon the information irrelevant to the current and very recent states, but may be useful later. Hence, this operation results in information loss in processing the items, and concludes the fake relationships and dependencies.
To remedy these defects, researchers have attempted to incorporate other advanced techniques into RNN-based models [12][13][14] to boost the memory of RNNs. For instance, [12] combines the global and local RNN-based encoder-decoder architectures with the attention mechanism, so that previous sessions can be involved into the next item prediction to retain both of the long-and short-term important information. In [15], a Recurrent Convolutional Neural Network (RCNN) is created which employs the recurrent architecture of RNNs to remember long-term dependencies and adopts the convolutional operation of Convolutional Neural Networks (CNNs) to retrieve short-term relationships among recurrent hidden states. [13] builds a parallel RNN-based architecture to exploit rich features of items, such as images, texts and videos, in multiple branches, to enhance the knowledge of the recommender system. [14] integrates an external memory based on Key-Value Memory Networks (KV-MN) [16] into a GRU network, which frames the item attributes into pairs of key-value vectors, and employs the outputs of RNN cells as the query to extract the relevant representations into the final prediction.
Nevertheless, these works focus on attaching one or several external components (such as attention, CNNs and KV-MV) to the traditional RNN architectures, in order to take advantage of the external knowledge. The improvement of inner structures of RNNs, including LSTMs and GRUs, is still challenging. Hence, fuelled by the desire for effectively and fundamentally overcoming the challenge, we propose to address these issues via a novel network based on an LSTM, called Long Short-Term enhanced Memory (LSTeM), specific for Sequential recommendation, which boosts the memory mechanism of original LSTMs in two ways.
Firstly, we design a new structure for the gates in LSTMs. The gates are internal components in original LSTMs that regulate the flow of information by deciding which part of data information in a sequence to input, keep or discard. Specifically, they act as filters to take in and pass down the important information, i.e., the information relevant to the key points, and to delete other information considered unnecessary to the decisionmaking process, based on the situation at each time step. The traditional gates are represented as the activated sum of the weighted current input and the hidden states, which input and discard information without considering the specific context at each time step. Inspired by [17], we replace the original addition operations with the product of querykey pairs of the "Q-K-V" triplet, which maps each embedding into a query, a key and a value, and express the dependencies of two items by the compatibility of one's key and the other's query. From the other perspective, as discussed above, a sequence is modeled into a directed graph along the temporal dimension in RNNs, so the query-key pair can be seen as a graph kernel function that transforms the data entities to a common latent space and then calculates their correlation by inner products. Thereby, the new gates are more able to dynamically capture the relationship between the current input and the historical memory at each time step, instead of simply taking their summation. To this end, the personalized item information is introduced at different steps, and the correlation between the item at each step and the previous items are extracted and represented more flexibly and more comprehensively. In other words, the model processes the information in both the item and the memory depending on the relationships instead of mechanically excessively emphasizing the adjacent items. Hence, this strategy alleviates the first problem of RNNs that tends to make point-wise predictions and ignore the collective dependencies.
Secondly, we design a "recover gate" to assist the model to find back the lost information discarded in previous steps, but found useful currently. This recover gate is served with a dynamic global memory embedding generated by a memory function, which is endowed with a Self-Attention mechanism to provide an aggregated expression of all the previous items to the current item. Self-Attention is a promising mechanism designed to summarize the information of a sequence based on the dependencies among its elements [17], through mimicking human's cognitive attention that strengthens the important parts of the data features and the inner correlations within a sequence and fades out the rest. Hence, the key information of a user's sequence can be condensed into a single vector, which is then used by the recover gate.
The main contributions of this paper are listed as follows: -To the best of our knowledge, LSTeM is the very first work to enhance the memory mechanism of LSTMs by proposing a new internal structure of LSTMs specific for sequential recommender systems. The "Q-K-V" mechanism is introduced to design the new gates of LSTMs to boost the ability to capture the information and relationships between items and memories. A recover gate with dynamic global memory embeddings is integrated into the novel structure. -Extensive experiments on four popular sequential recommendation datasets have been conducted to demonstrate the promising performance of LSTeM by comparing it with 9 state-of-the-art methods.
The rest of this paper is organized as follows: In Section 2, we will discuss the related work; in Section 3, we will explicitly introduce the proposed approach LSTeM; in Section 4, the extensive experiments will be discussed, followed by the conclusion of the whole paper in Section 5.

Sequential recommendation
The existing sequential recommendation works can be generally separated into three categories, traditional methods, latent representation approaches and deep neural networkbased models [1]. The traditional methods proposed in the early time mainly focus on frequent pattern mining and Markov Chains (MC), a stochastic model to learn sequential patterns and predict the next item, such as [18,19]. The Markov Chains seem innately applicable to tackle the next item prediction, but this stochastic model only considers the relationships between adjacent items in an interaction history, so MC-based works merely succeed in short-term recommendation [1], but fail in long-term prediction. After that, latent representation approaches sprung up, and hence, researchers shifted their attention to factorization and embedding learning, among which Matrix Factorization (MF) is a popular algorithm. MF decomposes the user-item interaction matrix into the product of two lower dimensionality rectangular matrices to identify the latent correspondence between a user factor and an item factor [20]. Hence, in deep learningbased recommender systems [21][22][23][24], it is a natural thought to build a two-branch network to predict the interaction rate between users' interest representations and items features.
In recent years, driven by the development of deep learning, a large number of neural network-based methods have been proposed in this study domain. The commonly adopted techniques include Recurrent neural networks (RNNs), Convolutional neural networks (CNNs), and Graph neural networks (GNNs).
CNN-based methods, such as [25,26], first assemble the item embeddings into a matrix and implement convolution operations on it. There is an obvious drawback in this class that CNNs are not able to adequately capture the orders in sequences, so the intrinsic sequential relationships are more likely to be ignored. By contrast, GNNs are more strong to model temporal data. The typical GNN-based methods on sequential recommendation tasks build a directed graph, in which the nodes represent items, and the edges are deemed as the dependencies, so that graph theories can be used to analyze the relationships of these components. In that way, a GNN-based model is able to explore complex and implicit relationships among users and items. Hence, this class has a potential on explainable sequential recommendation. RNN-based methods have dominated in the area of sequential recommendation for a long time, since RNNs are born to handle sequential data, by sequentially modeling items in their cells, in order to dig latent patterns. The popular variants of RNNs, LSTMs and GRUs are frequently used in this task, such as [9,10,12,24,[27][28][29]. Nevertheless, as discussed in section 1, RNN-based models still have some drawbacks to overcome for sequential recommendation.
To overcome the limitations, a number of advanced hybrid methods that combine the three basic classes and other techniques have been devised. Attention mechanism is one of the most popular techniques to be adopted in sequential recommendation. In [30], the authors propose a two-layer hierarchical attention network to properly capture the dynamic users' preference, in which the first layer learns the long-term features, while the second layer couples long-term and short-term patterns and outputs users' representation. [31] applies the Self-Attention mechanism created for transformers to allow the model to consider long-range dependencies on dense data, and to pay more attention to recent activities on sparse distribution. Memory networks are another class of techniques to enhance the model's memory capacity, which ordinarily merge an external memory matrix into the original model. For example, [14] adds a Key-Value Memory Network to RNN-based networks, which splits a memory slot into a key vector and a value vector and integrates the key-value pairs with GRU outputs to enhance the semantic representation and capture the sequential user preference at the same time.

Recurrent neural networks (RNNs)
Recurrent Neural Networks (RNNs) are a class of unit-based models for tackling sequences of data entities. The main idea of RNNs is to model a sequence into a directed graph along the temporal dimension, and retrieve the information step by step using the same unit (cell), so that the dependencies among the elements of sequences can be accurately captured. However, vanilla RNNs are not applicable for long-term sequences, and hence, to better deal with the long-term dependencies, two variants of RNNs, the Long-Short Term Memories (LSTMs) [6,7] and the Gated Recurrent Units (GRUs) [8], have been proposed and widely applied, which are featured by the internal memory mechanism. In this paper, we focus on LSTMs, which have an edge over vanilla RNNs due to the property of selectively remembering and forgetting information for both long and short periods of time. An LSTM unit is a neural network containing a group of repeating cell modules, whose main components are three gates, including an input gate, a forget gate and an output gate. The gates serve as filters to determine what information should be allowed to get into the system and be retained or dropped. In this way, only the valid information is passed down the sequence. Nevertheless, there are still a part of useful elements inevitably discarded ahead of the time when they are needed, or even have not gotten into the memory. This is a natural defect of LSTMs, and hence, to allow this class of models to be used in sequential recommendation, many methods have been devised to fill this gap in applications. For example, in [12][13][14][15]32], authors incorporate other advanced techniques, such as attention mechanism, Key-Value Memory Networks and CNNs into RNNs to achieve better performance in sequential recommendation.

Self-attention
First introduced for language processing, the promising mechanism, Self-Attention, has seen widespread use in deep learning, and the applications are not only for NLP, image processing, but also for recommender systems. The main idea of the general attention is to capture intrinsic relationships of the relevant data entities while ignoring others, which works in a similar way to human brain actions. In the early articles, people usually use attention mechanism as a component to build models with other technique frameworks, such as attention with RNNs [12,16], attention with Matrix Factorization, etc. In these works, the important information is properly extracted, summarized and emphasized. In comparison, in recent studies, people apply attention mechanism in transformers, an independent method to deal with the components within an individual sequence. This Self-Attention-based sequence-to-sequence model relates elements in different positions to each other in a sequence, depending on their dependencies, in order to gain the aggregated representation of this sequence or every element.
For instance, [31] constructs a Self-Attention-based model that adds adaptive weights to the current item according to its relationships with previous items at each time step. [33] employs the deep bidirectional Self-Attention paradigm, BERT [34], to model users' behavior sequences and predict the random masked items by jointly depending on their left and right context. On the foundation of it, [35] merge side information of items into the queries and keys of BERT framework to generate better attention distribution so that information overwhelming can be avoided.

Long short-term enhanced memory
In this section, we will show the details of our proposed Long Short-Term enhanced Memory (LSTeM) for sequential recommendation. We will first briefly introduce the key ideas of this method and the problem formulation, and then describe the structure of LSTeM in detail.

Motivation
Previous works [12,13,15,36,37] have already shown that RNN-based models, including LSTMs, can carry out sequential recommendation tasks, and they have made attempts to introduce other advanced techniques to solve the inherent problems of RNNs. These paradigms work as external components, so that the performance of the models can be boosted by taking advantage of the external knowledge. In this work, we consider to overcome these challenges by improving the intrinsic property of the RNNs and propose a Long Short-Term enhanced Memory (LSTeM) to enhance the performance of LSTMs in sequential recommendation. Specifically, we merge the "Q-K-V" mechanism, which is conventionally applied in Self-Attention, into LSTM cells to create a novel structure of gates and improve the model's memory. In addition, we add a component, the recover gate with a global memory embedding, which is a Self-Attentional representation of previous items in a sequence, to find back the lost but useful information. The whole architecture of LSTeM is illustrated in Figure 1.

Problem formulation
A dataset of users' historical actions consists of M users' temporally ordered user-item interaction sequences, which is denoted as S (u) M u=1 . A sequence comprises k items and is denoted as where k is a variable number, which means that the length of a sequence is dynamic.
, where e j N j=1 is the set of all N items in the dataset, and t is the time step index of the item being interacted with in a user's behavior sequence. In sequential recommendation, given a user's interaction history with t time steps, the model aims to predict e (u) t+1 , i.e., the item this user interacts with at the next step t + 1.

Framework
As described in Sects. 1 and 2, an original LSTM model is a neural network composed of a chain of repeating cell modules that allows a sequence of items to pass into the model one by one. Each cell has a three-gate structure to decide what information to take in, remove or retain. Specifically, given an input x t at time t, and the hidden state h t−1 and memory c t−1 from last time step, the model works as follows: where i t , f t and o t are the input (i.e., update) gate, the forget gate and the output gate, respectively; c t and c t−1 are the long-term memory cell state at step t and step t − 1 ; s and t are sigmoid function and hyperbolic tangent function. Here, the three gates (Eq. 1, 2 and 3) and the current memory cell c t (Eq. 4) are represented as the activated sum of the weighted current input x t , and last hidden state h t−1 . In Eq. 5, the forget gate f t multiplies c t−1 to determine what to discard from the history memory, and the input gate i t times c t to compute what to input into the memory, while the sum of these two terms is the current memory cell state c t . The current hidden state h t is the product of the output gate o t and activated c t (Eq. 6). Here, U ( * ) and W ( * ) are the parameters of the model.
In LSTeM, we employ a novel LSTM architecture as the basis. Each novel cell is composed of four gates, including an input gate, a forget gate, a recover gate and an output gate. Among them, the input gate, the forget gate and the output gate are variants of LSTMs' original components, while the recover gate is a new part fused into the original structure.
Concretely, for a user's action history S (u) , at time step t, the item e (u) t in the sequence are firstly converted into an embedding and then fed into the cell, together with the hidden state h t−1 and long-term memory c t−1 from the previous time step, as well as a global historical memory embedding m t generated by a Self-Attention model. The cell computes the four gates and the corresponding values using the "Q-K-V" mechanism, and then outputs the updated hidden state h t and the updated memory c t . The hidden state h t can be regarded as the representation of the user's interest at the current time t to predict the next item. The details are described in the following sections.

Embedding layer
To allow data instances to pass into the model in mini-batches, we first transform each user's behavior history to a fixed-length embedding sequence , where n is the fixed length of the sequence. If the number of historical points is less than n, we insert padding items in the front of the sequence, while if the number is larger than n, we only keep the last n items. We create an embedding matrix E ∈ ℝ N×d , where d is the latent dimension of the embeddings. Here, we use a constant vector zero to stand for a padding entity. We put the uniform-length sequences into an embedding lookup table that stores embeddings of all items in a dataset, to get the representation matrix. In summary, the embedding matrix of an individual sequence S (u) is denoted as E (u) ∈ ℝ n×d .

The new structures of gates
As described above, the vanilla LSTMs have three gates, including an input gate, a forget gate and an output gate, and each gate is represented as (U ( * )ê (u) t + W ( * ) h t−1 ) , where U ( * ) and W ( * ) are the parameters of the projection functions. Then the gates function as the filters to multiply the corresponding components, in order to accept or discard information. We argue that this operation is not adequate to perform the personalized information extraction and to convey the dynamic interests of users, in terms of a time step. This is because the addition of the two vectors (i.e., the current interacted item and the historical memory) tends to present the summation of their semantic content, rather than the filtered information relevant to each other. In other words, a gate treats all information contained in an embedding equivalently, no matter how related it is to the situation at the current step, which results in that the adjacent item receiving excessive consideration (due to the short-term memory h t−1 is highly related to the adjacent item) while other information is ignored. In contrast, the "Q-K-V" mechanism selectively extracts the information from ê (u) t and h t−1 ) , which emphasizes the correlation between the item at this step and the memory, and neglects the irrelevant content, in respect of the objective of the gate. Therefore, we use the "Q-K-V" mechanism to build the gates' structures.
The "Q-K-V" mechanism is a paradigm firstly introduced to Self-Attention (discussed in Section 2.3) by [17]. The main idea is to encode each element x i within a sequence to a triplet made up of a query q i , a key k i and a value v i , and then represent the correlation ij between two elements, x i and x j by the normalized product of one's query q i and the other's key k j , and output the weighted value ( ij v j ) . The formula for the whole sequence is and d k is the dimensionality of the key K. In practice, softmax function is a commonly used compatibility function to normalize the attention weights, so that the sum of the elements' weights equals one. These normalized weights represent the relevant proportions of other elements.
In LSTeM, we leverage the "Q-K-V" mechanism to enhance the ability of capturing the correlation between the item embedding and the memory of the model. Specifically, at time t, given the current item ê (u) t , the hidden state h t−1 and the memory c t−1 from the previous step, and the global memory m avg t−1 (explained in Section 3.3.3), we map these embeddings to different groups of query, key and value for the corresponding gates. It is worth emphasizing that we only apply the form of "Q-K-V" mechanism to capture the correlations and get the corresponding values in LSTeM instead of using the Self-Attention mechanism, because we only need to get the correlations of these embeddings and do not need to compute the relevant proportion of each pair of them.
Hence, we substitute sigmoid function for the softmax function in Self-Attention to normalize the gate weights to [0, 1]. For the purpose of simplification and clearness, in the following part, we omit the annotation (u), and apart from the training parameters, all the components illustrated are for an individual user.
The input gate is in control of the information accepted into the memory at the current step derived from two sources, the input item ê t and the hidden state h t−1 (i.e., the most recent historical memory). Therefore, the gate is defined as the sum of two products: one is the product of the query of ê t and the key of h t−1 , which indicates what information in the short-term memory is related to this item; on the contrary, the other is the product of the query of h t−1 and the key of ê t , which presents what characteristics of this item interest the user. The "Q-K-V" triplet of ê t for the input gate can be denoted as (q t,i,e , k t,i,e , v t,i,e ) , and the triplet of h t−1 is (q t,i,h , k t,i,h , v t,i,h ) . Here, i represents the input gate, t is the time step, and e and h denote the original embedding of the triplets, which are the item embedding and the hidden state, respectively. However, for the sake of simplicity, we use a single vector to serve as both of the query and the key of ê t or h t−1 for computing the reciprocal dependency with them each other, so the simplified formula is where qk t,i,e = U (i)ê t is the query and key of ê t ; qk t,i,h = W (i) h t−1 is the query and key of h t−1 , and d is the dimension of the embeddings; v t,i,e = P (i,e)ê t and v t,i,h = P (i,h) h t−1 are the values. ( W ( * ) , U ( * ) and P ( * ) are trainable parameters.) In this way, the information accepted into the long-term memory c t can be computed as the sum of the two triplets of "Q-K-V" mechanism.
For the forget gate that is in charge of the information which needs to be removed from the long-term memory c t−1 , we also use a "Q-K-V" triplet to build it. To be specific, this operation is represented as , and "f" represents the forget gate. It should be noted that, here, we use the key of h t−1 to replace the key of c t−1 , in view of h t−1 is a selective projection of c t−1 , which emphasizes most recent information (explained in section 3.3.4). Besides, c t−1 is employed to be the value of itself.
In this manner, c t and c t−1 get the semantic information imported from ê t and kept in h t−1 , respectively, which are specific for the current time step. In other words, they imply that "according to the user's current interest, which information should be noticed, obtained and passed down from this step and the most recent steps (i.e., the short-term memory); and which part should be removed from the long-term memory." Here, we rethink on the problems of RNN-based architectures for sequential recommendation discussed in Section 1. By the "Q-K-V" mechanism, the gates in LSTeM filter information depending on the dependencies between the current item and the historical memory, so that the personalized information of the item is introduced at different steps, and the connections between this item and the previous items are captured more flexibly and more comprehensively. That is to say, the model does not accept the same information from the item in different steps, and processes the information in the memory dynamically depending on the relationships instead of excessively emphasizing the adjacent item. Hence, this strategy alleviates the problem that RNNs tend to make point-wise predictions and ignore the collective dependencies among the whole sequence. However, it is still difficult for the model to tackle the long-term sequence, since the selectively updating and forgetting mechanism is naturally likely to abandon the information not needed in the recent and current time steps. To fill this gap, we design a recover gate, working with a global memory.

The recover gate
To allow the model to find back the memory lost along the way, we introduce a global memory that contains the integrated information extracted from the history to serve as the source for memory recovering. In order to fuse the global memory into the model, it is represented as a vector generated from all of the previous item embeddings in the sequence by using a Self-Attention method.
As described in Section 2.3, Self-Attention works within an individual sequence to relate elements in different positions, depending on their dependencies, to extract the aggregated representation of this sequence or every element. Given a sequence X = x i n i=1 with n elements, for an element x q , the aggregated expression is where (x q , x i ) is an attention function to calculate the relevant weights of each x i for x q , based on the dependency between x i and x q .
In LSTeM, we use a simple Self-Attention manner to generate an integrated memory vector in LSTeM. At time t, given the embedding matrix of the previous items, E t−1 ∈ ℝ (t−1)×d , we first create the affinity that represents the dependencies among the individual items, and compute the weighted sequence embedding. Then we implement the Global Average Pooling (GAP) along the sequence to construct a compact global memory embedding m avg t−1 . The formulas are m avg t−1 is an aggregated embedding of all the previous items, and aids the model to find back the lost history that has been discarded by the forget gate during the previous steps, but needed at the current step. Here, we continue to use the "Q-K-V" mechanism on ê t and m avg t−1 to construct the recover gate: where q t,r,e = U (r)ê t , k t,r,m = W (r) m avg t−1 , v t,r,m = P (r) m avg t−1 and "r" represents the recover gate.
After the input gate, the forget gate and the recover gate are generated and used, we add the information accepted into the memory at this step together:

The output gate
The last gate is the output gate that assesses the informativeness of the cell output, i.e., which part of the c t is exported. Since this gate is closely related to both the current state and the global memory, it is determined by the product of three query-key pairs, including the reciprocal query-key pair of ê t and h t−1 , namely, where h t is the output of an LSTeM cell, i.e., the hidden state of time t.

Prediction layers
After the hidden state of the current step h t is obtained, we pass it into a dual-layer feedforward network to get the final user's interest representation at time t: where W (p * ) ∈ ℝ d×d are the trainable parameters, is the Rectified Linear Unit (ReLU) activation function between two feed-forward layers, and ĥ t is the vector that indicates the user's current preference.

Layer normalization and dropout
To accelerate and stabilize the training process and help to avoid gradient errors [38], layer normalization statistic is used in the LSTeM cell to normalize the item embeddings and the hidden states into a zero-mean and unit-variance space. Specifically, given an input vector x, the operation is formulated as: where and are the mean and variance of x, while g and b are scale and bias parameters to learn.
We also use dropout [39] in each feed-forward layer in LSTeM. Dropout regularization allows the model to simulate a number of different architectures by randomly turning off network nodes during the training process. It is used to reduce overfitting and to improve the generalization of deep learning models. In LSTeM, we apply dropout layers after the embedding layer and between the two prediction layers.

Recommendation and objective function
Recall that LSTeM consists of an embedding layer, a new LSTM structure, and a dual-layer prediction network. For each step time t, the current implicit user's interest embedding p t ∈ ℝ d is extracted from the current interacted item and the historical memory of the sequence by these model components. To train the model, we adopt Matrix Factorization (MF) to rate the probability of a set of candidates to be the next item in the user's sequence. The candidate set comprises C items, including a positive one (i.e., the real next item), and (C − 1) randomly selected negative ones.
As introduced in Section 2, MF is a commonly used method for recommendation, in which the user-item interaction rates are predicted as their inner products. Therefore, the candidates are fed into the embedding layer and converted to the item embeddings , where x i ∈ ℝ d , and the probability that the user interacts with x i at next time step is expressed as We then use Binary Cross Entropy as the objective function to calculate the loss. Here, the real next item is deemed as the positive answer, while the random candidates are the negative ones. The loss is computed as: Here, we get rid of the padding items using a mask to avoid the influence on the prediction accuracy.
After the training process, we test the model with the same strategy for candidates as in training. For a single sequence, We continuously pass all the items except the last one through the model, and the user's interest embedding at the second last time step p n−1 is retained to test with the candidates for the last step.

Discussion
In LSTeM, we design the new gates for LSTMs with the "Q-K-V" mechanism, which filters information based on the dependencies between the item and the memory. Therefore, at different time steps, the input information of an item is personalized, and the model can capture the relationships between the current item and the previous items more flexibly and more comprehensively than the original LSTMs. This strategy alleviates the first problem discussed in Section 1 that RNNs tend to make point-wise predictions and ignore the collective dependencies among the whole sequence.
Moreover, we propose a recover gate, which integrates the information of previous items into a global memory embedding by the simple Self-Attention method, to overcome the second problem that the forgetting mechanism of original LSTMs tends to discard the information not needed in the recent and current time step but important for afterward time steps.

Experiment
In this section, we show the experiments on LSTeM, including the experiment setting, the baselines, the datasets and the results. Also, we analyze the effectiveness of our method and have a discussion on the ablation studies. The experiments are designed to answer the following questions:

Datasets
We evaluate LSTeM on four datasets, which remarkably vary in domain, platform, and data sparsity: -MovieLens: A popular benchmark dataset in the domain of recommendation, which collects users' actions on the movie website. In LSTeM, We employ two stable ver- sions, MovieLens 1m (ML-1m) 1 and MovieLens 20m (ML-20m) 2 . ML-1m is the densest dataset in our evaluation. -Amazon 3 : A series of product purchase and review datasets collected from Amazon.
com. It is characterized by high sparsity and variability. Amazon consists of several subsets depending on the product categories, and in our experiments, we adopt "Beauty". -Steam 4 : A rich user-item interaction dataset gathered from Steam, which is a large cross-platform video game distribution system, created in 2016.
We process the datasets by following the common practice employed in [18,25,31,33]. For all datasets, the user-item interactions, including numeric ratings and the presence of reviews, are considered as implicit feedback. If a user has a piece of implicit feedback for an item, that means this user interacts with it. We build the users' behavior sequences for all users in the order of feedback timestamps. Besides, the sequences whose length is less than five are discarded to ensure the quality of the datasets. We separate each sequence into three parts: the last item of each sequence belongs to the test sets; the second last one is for validation; the rest are used in training. The data statistics are shown in Table 1. It can be seen that Beauty is the most sparse dataset, whose average length of user behavior sequences is only 6.02, while ML-1M is the densest one that has 163.5 items in a sequence in average.

Evaluation metrics
We evaluate our model with three metrics, Hit Rate (HR), Normalized Discounted Cumulative Gain (NDCG), and Mean Reciprocal Rank (MRR). For the TOP-K metrics, Hit Rate@K and NDCG@K, we take K = 1, 5, 10. Since there is only one ground truth in each sequence, here, HR@K can be regarded as Recall@k and in proportion to Precision@k. We omit NDCG@1, which equals to HR@1. To be efficient, we follow [14,22,31,40] to adopt the strategy that randomly samples 100 negative items that the user has not interacted with, together with the positive item, to be the test candidates. In addition, to ensure the test results are reliable and objective, we refer to [14,33], and sample half of the negative items based on their popularity, i.e., the probability of appearing in all actions.

Baselines
To evaluate the performance of LSTeM, we conduct the experiments on the baselines that can be separated into two categories. The first category contains the traditional methods on recommender systems, which only make the prediction based on users' feedback without considering their temporal order, or only depending on the adjacent items: -POPRec: Global Popularity is a naive method that simply sorts the item based on their popularity in the dataset in descending order. -BPR-MF [41]: Bayesian Personalized Ranking is a classic approach in recommendation.
It is designed to learn to rank the items based on a pairwise personalized ranking loss derived from the maximum posterior estimator. -FPMC [18]: Factorized Personalized Markov Chains is a hybrid model that merges matrix factorization and factorized Markov chains to build individual transition matrix for each user, in order to predict the next items in users' action sequences. -NCF [22]: Neural network-based Collaborative Filtering leverages a multi-layer perception model to learn the prediction of users' interest in items. It is a classic method to use MLP in the traditional approach, matrix factorization.
By contrast, the methods in the second category are based on deep neural networks. At least several of the items previous to the current item and the temporal orders in a sequence are considered in prediction.
-GRU4Rec [9]: This is a typical work that applies recurrent neural networks (RNNs) on sequential commendation. It designs the session-parallel mini-batches to model the whole sessions into GRUs to predict the next items. -GRU4Rec + [10]: This is an advanced variant of GRU4Rec, which proposes a new set of ranking loss functions and the sampling strategy that provides top-k gains to improve the performance. -Caser [25]: Convolutional Sequence Embeddings is a model based on a CNN architecture. It adopts convolutional operations on the item embedding matrix in both horizontal and vertical directions to model high-order Markov chains for sequential recommendation. -SASRec [31]: Self-Attentive Sequential Recommendation applies the left-to-right Self-Attention mechanism on recommendation, which works well in handling the relations among items with both long and short intervals. -RCNN [15]: Recurrent Convolutional Neural Network leverages the recurrent architecture of RNNs to retrieve long-term dependencies of items and conducts the convolutional operation of CNNs to extract short-term relationships among recurrent hidden states. It is noteworthy that, for the sake of fairness, we change the output dimension of RCNN's last fully-connected layer to the same as the item embeddings, and modify the objective function to the same as LSTeM, so that we can use our evaluation metrics to test it.

Implementation details
We implement LSTeM with PyTorch 5 and initialize the parameters with normal distribution in the range [ − √ d , where d is the dimensionality of item embeddings. The model is optimized by the Adam optimizer [42], which is a combination of AdaGrad and RMSProp algorithms. Adam takes an adaptive moment estimation strategy to compute individual learning rates for model parameters. Hence, it performs better in dealing with the sparse gradients on noisy problems. The learning rate of Adam optimizer is set to 1e − 3,and 1 = 0.9 , 2 = 0.999 . We set the batch size to 128. Depending on the density of datasets, the dropout rates for ML-1m and ML-20m are set to 0.2, while for other datasets, the rates are set to 0.5. The fixed sequence lengths n are different based on the average length of users' action histories, so we set 200 for ML-1m and ML-20m, 50 for others. The sensitivity of the dimension of item embeddings is examined in the experiments and we discuss this in 4.6.1. For the sake of fairness, we reproduce the codes of the baselines in PyTorch, referring to their papers and original codes (if published).

Performance analysis
To answer Q1, we compare LSTeM with 9 existing works, and the evaluation results are shown in Table 2. It can be observed that LSTeM relatively performs the best in all methods. On the whole, it is obvious that all the approaches do better on the denser datasets (i.e., ML-1m and ML-20m) than on the sparser ones (i.e., Beauty and Steam). The methods based on deep neural networks (i.e., GRU4Rec, GRU4Rec + , Caser, RCNN, SASRec and LSTeM) perform better than the traditional approaches (i.e., BPR-MF, FPMC and NCF) and the naive method (i.e., POPRec). This is not only due to the inherent advantages of deep learning, but also because the deep-neural-network group considers the temporal orders of items and involves at least several previous items to predict the next item, while the traditional approaches do not take into account the positions of the items in the sequence, or even only considers the neighboring items in the sequences. This observation verifies that any previous item may have an effect on the current time step. Besides, POPRec gets the worst results because of its non-individuality, which demonstrates that users' preference is dynamic, complex and widely different.
As to LSTeM, compared to the state-of-the-art baselines, SASRec, RCNN, Caser and GRU4Rec + , it gains relative improvements in both sparse datasets (i.e., Beauty and Steam) and the dense datasets (i.e., ML-1m and ML-20m). Among them, the most significant improvement can be found in ML-20m, which demonstrates that LSTeM has the stronger ability to take the advantage of large data volume. In addition, as an RNN-based model, LSTeM shows a distinct enhancement in comparison to RCNN, GRU4Rec and GRU4Rec + . This indicates that, to some extent, the improvement of the internal structure in RNNs achieves a satisfactory result and we show and analyze more testing results in this respect in Section 4.6. It is noteworthy that, The performance of LSTeM and SASRec are close, and especially they are level pegging on Beauty. However, as a model based on Self-Attention, SASRec is naturally disadvantaged in terms of perceiving and making use of the item position and order information in users' interaction sequences, which is quite important for capturing users' interests. By contrast, LSTeM, an RNN-based method, has the absolute Table 2 Performance of LSTeM and baselines on four public datasets advantage of processing item positions and orders, and there is plenty of room for development on the gates and memory mechanism. Hence, it can be expected that future versions of LSTeM have the potential to achieve even better performance.

Ablation study
To answer Question 2, 3, 4 listed at the beginning of this section, we conduct a series of experiments to comprehensively examine the effectiveness of LSTeM in sequential recommendation, including three ablation studies: the sensitivity of the item embedding dimensionality, the efficacy of the recover gate, and the validity of the new gates with the "Q-K-V" mechanism.

Sensitivity of embedding dimensionality
The sensitivity of item embedding dimensionality (Q4) in five deep neural network-based methods is shown in Figure 2. We evaluate HR@10 and NDCG@10 of these methods with the item embedding dimension 10, 25, 50, 100 and 200, respectively. Overall, it can be seen that, for the sparse datasets, the larger the embedding dimension is, the better performance can be achieved, and this variate has a relatively heavier effect on Beauty than Steam. For the dense datasets, the models based on CNNs, such as RCNN and Caser, are very sensitive to the embedding dimension, because the larger embeddings allow the convolutional filters to receive more information. Moreover, the performance of LSTeM steadily increases with the embedding dimension getting larger from 10 to 50, but drops down between 50 and 100, and then gradually levels off. This phenomenon demonstrates  that LSTeM tends to converge as the dimensionality rises to 50. Therefore, in practice, for learning efficiency, there is no need to take a very large dimensionality.

Effectiveness of the recover gate and the new gate structures
To explore the effect of the recover gate (Q3) and the new gate structures (Q2) in LSTeM, we introduce three variants of LSTeM, and evaluate them and the traditional LSTM with NDCG@10 in the four datasets. The results are shown in Table 3. The first variant (No r-gate) removes the recover gate along with the global memory vector, namely, the variant model only retains three gates, input, forget and output. Here, for the output gate, we use The second variant (v-gates + sa) employs the vanilla addition operations within all gates as the same as traditional LSTMs, but also has the recover gate with the global memory, and the recover gate is formulated as The last variant (v-gates + mean) is the same as the second one apart from the global memory. Here, we generate the global memory vector simply using the element-wise mean of all (t − 1) item embeddings, instead of Self-Attention. From Table 3, it can be seen that the complete LSTeM obviously performs the best in all models, and all variants are better than the traditional LSTM. The variant without r-gate has an obvious decrease in performance, namely, the recover gate has a significant effect on the model's memory, which verifies our motivation to enhance the memory using the recover gate. Besides, both of the variants with vanilla gates perform worse than the complete model, which validates that the new gate structures with the "Q-K-V" mechanism contribute a distinct improvement in interpreting the dependencies between items and memories. The substantial difference between v − gates + sa and v − gates + mean shows that the utilization of Self-Attention in the global memory generation makes a big contribution toward capturing the latent relationships among items.
In addition, the time costs of LSTeM, LSTM and the three variants are also evaluated. All experiments are conducted with the same GeForce RTX 2080 Graphics Card. Table 4 shows the average training time costs for one epoch of the five models in the four datasets.  It can be seen that the time cost of LSTeM is acceptable, but there is still room for improvement. It is worth noting that the difference between LSTeM and the variant with vanilla gates (v-gates + sa) and the difference between the variants with Self-Attentional global memory (v-gates + sa) and mean-global memory (v-gates + mean) are both small, which means that the time cost of the "Q-K-V" mechanism and Self-Attention are relatively low.

Conclusion
In this work, we propose a novel Long Short-Term enhanced Memory model for sequential recommendation. It is the very first work to boost LSTMs' internal memory by proposing the new structures for gates. We apply the "Q-K-V" mechanism in the gates to capture the latent dependency between the current item and the historical memory, and also develop a recover gate to remedy the inadequacy of memory caused by the forgetting mechanism. The recover gate works with a dynamic global memory embedding generated by using a Self-Attention model. Extensive experiments have demonstrated that LSTeM achieves comparable performance to the state-of-the-art methods on the challenging datasets. Results of the ablation studies further verify the effectiveness of the new gate structures and the recover gate.
Funding Open Access funding enabled and organized by CAUL and its Member Institutions

Conflict of interest
The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.