LOB-Based Deep Learning Models for Stock Price Trend Prediction: A Benchmark Study

The recent advancements in Deep Learning (DL) research have notably influenced the finance sector. We examine the robustness and generalizability of fifteen state-of-the-art DL models for Stock Price Trend Prediction (SPTP) based on Limit Order Book (LOB) data. To carry out this study, we developed LOBCAST, an open-source framework that incorporates data preprocessing, DL model training, evaluation, and profit analysis. Our extensive experiments reveal that all models exhibit a significant performance drop when exposed to new data, thereby raising questions about their real-world market applicability. Our work serves as a benchmark, illuminating the potential and the limitations of current approaches and providing insights for innovative solutions.


Introduction
Predicting stock market prices is a complex endeavour due to myriad factors, including macroeconomic conditions and investor sentiment [1]. Nevertheless, professional traders and researchers usually forecast price movements by understanding key market properties, such as volatility or liquidity, and recognizing patterns to anticipate future market trends [2]. Effective mathematical models are essential for capturing complex market dependencies. The recent surge in artificial intelligence has led to significant work on using machine learning algorithms to predict future market trends [3][4][5]. Recent Deep Learning (DL) models have achieved over 88% F1-score in predicting market trends in simulated settings using historical data [6]. However, replicating these performances in real markets is challenging, suggesting a possible simulation-to-reality gap [7,8].
In this paper, we benchmark the most recent and promising DL approaches to Stock Price Trend Prediction (SPTP) based on Limit Order Book (LOB) data, one of the most valuable information sources available to traders on the stock markets. Our benchmark evaluates their robustness and generalizability [9][10][11]. In particular, we assess the models' robustness by comparing the stated performance with our reproduced results on the same dataset, FI-2010 [12]. We also assess their generalizability by testing their performance on unseen market scenarios using LOBSTER data [13]. We focus on novel data-driven approaches from Machine Learning (ML) and DL that analyze the market at its finest resolution, using high-frequency LOB data. In this work, we formally define the SPTP problem considering a ternary trend classification. Our findings reveal that while the best models exhibit robustness, achieving solid F1-scores on FI-2010, they show poor generalizability, as their performance drops significantly when applied to unseen LOBSTER market data.
Limit Order Book A stock exchange employs a matching engine to store and match the orders issued by the trading agents. This is achieved by updating the so-called Limit Order Book (LOB) data structure. Each security (tradable asset) has a LOB, recording all the outstanding bid and ask orders currently available on an exchange or a trading platform. The shape of the order book gives traders a simultaneous view of the market demand and supply.
Figure 1: An example of a LOB.
There are three major types of orders. Market orders are executed immediately at the best available price. Limit orders, instead, include the specification of a desired target price: a limit sell [buy] order will be executed only when it is matched to a buy [sell] order whose price is greater [lower] than or equal to the target price. Finally, a cancel order removes a previously submitted limit order. Figure 1 depicts an example of a LOB snapshot, characterized by buy orders (bid) and sell orders (ask) at different prices. A level, shown on the horizontal axis, represents the number of shares with the same price on either the bid or the ask side. In the example of Figure 1, there are three bid and three ask levels. The best bid is the price of the shares with the highest price on the bid side; analogously, the best ask is the price of the shares with the lowest price on the ask side. When the former exceeds or equals the latter, the corresponding limit ask and bid orders are executed. The LOB is updated with each event (order insertion/modification/cancellation) and can be sampled at regular time intervals.
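As a toy illustration of these mechanics (not part of the benchmark; real matching engines also handle price-time priority, partial fills, and further order types), the following sketch matches resting limit orders whenever the best bid meets or exceeds the best ask:

```python
# Minimal, illustrative limit-order matching sketch.
bids = {}  # price -> total shares (buy side)
asks = {}  # price -> total shares (sell side)

def add_limit_order(side, price, shares):
    book = bids if side == "bid" else asks
    book[price] = book.get(price, 0) + shares
    match()

def best_bid():
    return max(bids) if bids else None

def best_ask():
    return min(asks) if asks else None

def match():
    # Execute while the best bid meets or exceeds the best ask.
    while bids and asks and best_bid() >= best_ask():
        pb, pa = best_bid(), best_ask()
        qty = min(bids[pb], asks[pa])
        bids[pb] -= qty
        asks[pa] -= qty
        if bids[pb] == 0:
            del bids[pb]
        if asks[pa] == 0:
            del asks[pa]

add_limit_order("bid", 100.0, 50)  # resting buy order
add_limit_order("ask", 101.0, 30)  # resting sell: no cross, book now has a spread
add_limit_order("ask", 100.0, 20)  # crosses the spread: 20 shares trade at 100.0
```

After the third order, 20 shares execute and the book is left with 30 bid shares at 100.0 and 30 ask shares at 101.0.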
We represent the evolution of a LOB as a time series L, where each L(t) ∈ R^(4L) is called a LOB record, for t = 1, ..., N, where N is the number of LOB observations and L is the number of levels. In particular, L(t) = {P^s(t), V^s(t)}_{s ∈ {ask, bid}}, where P^ask(t), P^bid(t) ∈ R^L represent the prices of levels 1 to L of the LOB on the ask (s = ask) and bid (s = bid) side, respectively, at time t. Analogously, V^ask(t), V^bid(t) ∈ R^L represent the volumes. This means that for each t and every j ∈ {1, ..., L} on the ask side, V_j^ask(t) shares can be sold at price P_j^ask(t). The mid-price m(t) of the stock at time t is defined as the average of the best bid and the best ask:

m(t) = (P_1^ask(t) + P_1^bid(t)) / 2.

Mid-prices are synthetic values that are commonly used as indicators of the stock price trend. On average, if most of the executed orders are on the ask [bid] side, the mid-price increases [decreases] accordingly.
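A minimal sketch of the mid-price computation, assuming a LOB record is stored as four best-first lists (ask prices, ask volumes, bid prices, bid volumes); this layout is an illustrative choice, not the framework's actual data structure:

```python
# Mid-price: average of the level-1 (best) ask and bid prices.
def mid_price(record):
    ask_prices, _, bid_prices, _ = record
    return (ask_prices[0] + bid_prices[0]) / 2

record = ([100.25, 100.50, 100.75], [10, 5, 7],   # ask side, best first
          [100.00,  99.75,  99.50], [8, 12, 4])   # bid side, best first
print(mid_price(record))  # 100.125
```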
Trend Definition We use a ternary classification for trends: U ("upward") if the price trend is increasing; D ("downward") for decreasing prices; and S ("stable") for prices with negligible variations. Thanks to their informativeness, mid-prices are well suited to drive this classification. Nevertheless, because of the market's inherent fluctuations and shocks, they can exhibit highly volatile trends. For this reason, directly comparing consecutive mid-prices, i.e., m(t) and m(t+1), for stock price labelling would result in a noisily labelled dataset. As a result, labelling strategies typically employ smoother mid-price functions instead of raw mid-prices. Such functions consider mid-prices over arbitrarily long time intervals, called horizons. Our experiments adopt the labelling proposed in [12] and reused in several other state-of-the-art solutions we selected for benchmarking. The adopted labelling strategy compares the current mid-price to the average mid-price a_+(k, t) over a future horizon of k time units, formally:

a_+(k, t) = (1/k) Σ_{i=1}^{k} m(t + i).    (1)

The average mid-price is compared against a static threshold θ ∈ (0, 1), which identifies an interval around the current mid-price and defines the class of the trend at time t as follows:

l(t) = U if a_+(k, t) > m(t)(1 + θ);  D if a_+(k, t) < m(t)(1 − θ);  S otherwise.    (2)

With this labelling, we mitigate the effect of mid-price fluctuations by averaging over a desired horizon k and considering a trend stable when the average mid-price does not change significantly, thus reducing label noise. We highlight that the time stamps t can come either from a homogeneous or an event-based process. In our experiments, we consider an event-based process.
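A minimal sketch of this labelling strategy, comparing the average of the next k mid-prices against a band of width θ around the current mid-price (the series values are toy numbers, not taken from the datasets):

```python
# Ternary trend label from a mid-price series, horizon k, threshold theta.
def label(mid_prices, t, k, theta):
    m_t = mid_prices[t]
    a_plus = sum(mid_prices[t + 1 : t + k + 1]) / k  # average future mid-price
    if a_plus > m_t * (1 + theta):
        return "U"
    if a_plus < m_t * (1 - theta):
        return "D"
    return "S"

mids = [100.0, 100.0, 102.0, 104.0, 103.0, 101.0]
print(label(mids, t=0, k=3, theta=0.002))  # U: avg of next 3 is 102.0 > 100.2
```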
Models I/O Given the time series of a LOB L and a temporal window T = [t − h, t], h ∈ N, we can extract a market observation on T, M(T), by considering the sub-sequence of LOB observations from time t − h up to t. In Section 1 of the Supplemental Material (SUP), we give a representation of a market observation M(T) ∈ R^(h×4L). The market observation over the window [t − h, t] is associated with the label computed through Equations 1 and 2 at time t. An SPTP predictor takes as input a market observation and outputs a probability distribution over the trend classes U, D, and S.
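A sketch of how a market observation could be sliced out of a LOB time series (the records here are stand-in integers rather than real 4L-dimensional vectors):

```python
# Market observation M(T): the sub-sequence of LOB records on T = [t - h, t].
def market_observation(lob_series, t, h):
    return lob_series[t - h : t + 1]

series = list(range(12))  # 12 toy LOB records
obs = market_observation(series, t=10, h=4)
print(obs)  # [6, 7, 8, 9, 10]
```

The predictor then maps such a window to a probability distribution over (U, D, S).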

Experiments
We conducted an extensive evaluation to assess the robustness and generalizability of 15 DL models on the SPTP task, as presented in Section 3. Among these, 13 are SOTA models and 2 are DL baseline models commonly used in the literature. More details on the models are given in Section 4.2.
In line with many other studies, we adopt the definitions of robustness and generalizability introduced by J. Pineau et al. in their work [9]. Robustness is evaluated by testing the proposed models on FI-2010, a benchmark dataset employed in all surveyed papers. In some cases, the authors of the considered works did not provide crucial information, such as the code or the hyperparameters of their models, making reimplementation and hyperparameter search necessary. We refer to Section 5.1 in SUP for a complete description of the hyperparameter search. To evaluate generalizability, we created two datasets, called LOB-2021 and LOB-2022, extracted from the LOBSTER dataset [13]. We describe these datasets in Section 4.1.
Our experiments were carried out using LOBCAST [25], the open-source framework we make available online. The framework allows the definition of new price trend predictors based on LOB data. More details on the framework are given in Section 4.3.

Datasets
LOB data are rarely publicly available and are very expensive: stock exchanges (e.g., NASDAQ) provide fine-grained data only for high fees. The high cost and low availability restrict the application and development of DL algorithms in the research community.
The most widespread public LOB dataset is FI-2010, which is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0) and was proposed in 2017 by Ntakaris et al. [12] with the objective of evaluating the performance of machine learning models on the SPTP task. The dataset consists of LOB data from five Finnish companies of the NASDAQ Nordic stock market: Kesko Oyj, Outokumpu Oyj, Sampo, Rautaruukki, and Wärtsilä Oyj. Data span the period between June 1st and June 14th, 2010, corresponding to 10 trading days (trading happens only on business days). About 4 million limit order messages are stored for 10 levels of the LOB. The dataset has an event-based granularity, meaning that the time series records are not uniformly spaced in time. LOB observations are sampled at intervals of 10 events, resulting in a total of 394,337 events. This dataset has the intrinsic limitation of being already pre-processed (filtered, normalized, and labelled), so that the original LOB cannot be backtracked, thus hampering thorough experimentation. Additionally, the labelling method employed is prone to instability, as demonstrated by Zhang et al. in [26]. Moreover, the dataset is unbalanced at varying prediction horizons: as the horizon k ∈ K = {1, 2, 3, 5, 10} grows, the stationary class S becomes progressively less predominant in favour of the upward and downward classes. For instance, the class composition is, for k = 1, U: 18%, S: 63%, D: 19%; for k = 5, U: 32%, S: 35%, D: 33%; and for k = 10, U: 37%, S: 25%, D: 38%.
To test the generalizability of the models in a more realistic scenario, we used data extracted from LOBSTER [13], an online provider of order book data, which is not available for free, as is often the case for critical applications such as health and finance [9]. The data are reconstructed from NASDAQ-traded stocks and are made available to the research community for an annual fee. To compare the performance of the algorithms in a wide range of scenarios, we created two datasets, LOB-2021 and LOB-2022, covering July 2021 and February 2022, respectively. The selection of these two periods aimed to capture data from periods with different levels of market volatility: February 2022 exhibited higher volatility compared to July 2021, largely influenced by the Ukrainian crisis. This allows for an assessment of models across varying market conditions. We describe our stock selection procedure in detail in Section 3 in SUP.
Datasets for the Generalizability Study Due to copyright reasons, we are unable to release the LOB-2021 and LOB-2022 datasets. However, in Section 4 in SUP, we provide detailed insights into how they are generated, ensuring transparency and replicability in future research. The approach we adopt to generate both datasets closely follows the creation process presented for FI-2010 in [12]. In summary, for each considered stock s, we construct a stock time series of LOB records L_s(t) ∈ R^(4L), with L = 10. To resemble the FI-2010 structure, we sample a market observation every 10 events and split the records into training, validation, and testing sets using a 6-2-2 days split. Normalization is performed on the stock time series using a z-score approach, and the dataset is labelled by leveraging the trend definition described in Equation (2). Lastly, both LOB-2021 and LOB-2022 contain prediction labels for each of the considered horizons in K.
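An illustrative sketch of the two preprocessing steps named above, event sampling and z-score normalization. The fitting details (e.g., fitting the statistics on the sampled training records only, one feature at a time) are assumptions for illustration; see Section 4 in SUP for the actual procedure:

```python
from statistics import mean, pstdev

# Keep one record every `step` events (event-based sampling).
def sample_every(records, step=10):
    return records[::step]

# Fit a per-feature z-score normalizer on a training column.
def zscore_fit(train_column):
    mu, sigma = mean(train_column), pstdev(train_column)
    return lambda x: (x - mu) / sigma if sigma else 0.0

events = list(range(100))           # 100 toy events for a single feature
sampled = sample_every(events, 10)  # -> 10 records: 0, 10, ..., 90
norm = zscore_fit(sampled)
print(sampled[:3], round(norm(sampled[0]), 3))
```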

Models
We selected 13 SOTA models based on DL for the SPTP task. These models were proposed in papers published between 2017 and 2022 and utilized LOB data for training and testing. In addition to the models proposed in the selected papers, we also included two classical DL algorithms, namely the Multilayer Perceptron (MLP) and the Long Short-Term Memory network (LSTM), which were used as benchmarks in [27] and in [31], respectively. All proposed models are based on DNNs and were originally trained and tested on the FI-2010 dataset.
A comprehensive summary of the benchmarked models can be found in Table 1; for additional details, we refer the reader to Section 2 in SUP. In Table 1, the temporal shape represents the length of the input market observation for the model. The features shape refers to the number of features used by the models to infer the trend in the original papers. In the table, we also indicate whether the authors released the code and, if so, whether they used PyTorch (PT) [38] or TensorFlow (TF) [39]. This is relevant because, to ensure consistency and compatibility within our proposed framework, which is based on PyTorch Lightning, we found it necessary to re-implement the models for which the code was not available or was only available in TensorFlow. To improve the reproducibility of results, it is advisable for the research community to publish the code they develop.
In High-Frequency Trading (HFT), and algorithmic trading in general, minimizing the latency between model querying and order placement is of utmost importance [40]. To explore this aspect, we analyzed the inference time in milliseconds of all models, based on the experiments reported in Section 4.4. As shown in Table 1, DEEPLOB, DEEPLOBAT, AXIALLOB, TRANSLOB, and ATNBOF had inference times in the order of milliseconds, making them potentially unsuitable for HFT applications compared to the models with shorter times. Finally, we report the number of trainable parameters for each model. A noteworthy observation is that the average number of parameters is very low compared to classical fields such as computer vision [41] and natural language processing [42,43]. This leads us to conjecture that current systems are inadequate at effectively handling the complexity of LOB data, as we will verify in the rest of this paper.
To explore the possibility of achieving new SOTA performance by combining the predictions of all 15 models, we implemented two ensemble methods: MAJORITY, which performs majority voting weighted by the F1-score of the predictions made by all the models, and METALOB, which is trained on the predictions made by the individual models to learn the most appropriate aggregation function. A detailed description of these ensemble methods can be found in Section 2.1 in SUP.
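A minimal sketch of the MAJORITY aggregation rule described above (the per-model predictions and F1-scores below are hypothetical):

```python
# Weighted majority vote: each model votes with weight equal to its F1-score;
# the class accumulating the largest total weight wins.
def weighted_majority(predictions, f1_scores):
    totals = {"U": 0.0, "S": 0.0, "D": 0.0}
    for pred, f1 in zip(predictions, f1_scores):
        totals[pred] += f1
    return max(totals, key=totals.get)

preds = ["U", "U", "S", "D"]       # hypothetical per-model predictions
f1s   = [0.60, 0.55, 0.80, 0.50]   # hypothetical per-model F1-scores
print(weighted_majority(preds, f1s))  # U: 0.60 + 0.55 = 1.15 beats S's 0.80
```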

LOBCAST Framework for SPTP
We present LOBCAST [25], a Python-based framework developed for stock market trend forecasting using LOB data.
LOBCAST is an open-source framework that enables users to test DL models for the SPTP task. The framework provides data pre-processing functionalities, including normalization, splitting, and labelling. LOBCAST also offers a comprehensive training environment for DL models implemented in PyTorch Lightning [38]. It integrates with the popular hyperparameter tuning framework WANDB [44], which allows users to tune and optimize model performance efficiently. The framework generates detailed reports for the trained models, including performance metrics for the learning task (F1-score, accuracy, recall, etc.). LOBCAST supports backtesting for profit analysis, utilizing the external library Backtesting.py [45]. This feature enables users to assess the profitability of their models in simulated trading scenarios. We plan to add new features such as (i) training and testing with different LOB representations [46,47], and (ii) tests on adversarial perturbations to evaluate the representations' robustness [48]. We believe that LOBCAST, along with the advancements in DL models and the utilization of LOB data, has the potential to improve the state of the art in trend forecasting in the financial domain.

Performance, Robustness and Generalizability
To test robustness and generalizability, we ran our experiments for each model using five different seeds to ensure reliable results and mitigate the impact of the random initialization of network weights and of training dataset shuffling. The training process involved training the 15 models, for each seed, on each of the considered prediction horizons (K = {1, 2, 3, 5, 10}). More details on the experimental setting are provided in SUP Section 5. On average over all 5 runs, the training process for all the models took approximately 155 hours for FI-2010 and 258 hours for LOB-2021/2022, utilizing a cluster comprising 8 GPUs (1 NVIDIA GeForce RTX 2060, 2 NVIDIA GeForce RTX 3070, and 5 NVIDIA Quadro RTX 6000).
In Table 2, we summarize the results of our experiments. As the datasets are not well balanced, we focus on the F1-score; other performance metrics are reported in the SUP. The table compares the claimed performance of each system (column F1 Claim) with the performance measured in the robustness (FI-2010) and generalizability (LOB-2021 and LOB-2022) experiments. For each dataset, we show the average performance and the standard deviation achieved by each model over all the horizons, along with its rank.
To evaluate the robustness and the generalizability of the models, we compute a robustness score and a generalizability score, each a value ≤ 100 computed as 100 − (|A| + S), where A and S are defined as follows. A is the average difference between the F1-score reported in the original paper and the one we observed in our experiments, on FI-2010 for robustness and on LOB-2021 and LOB-2022 for generalizability. S is the standard deviation of these differences. The score penalizes models that demonstrate higher variability in their performance by subtracting the standard deviation. The average and standard deviation were computed over the declared horizons for each model, considering all five seeds. Our experiments highlight the following findings.
1. Except for a few systems, there is a considerable difference between the claimed performances and those measured in both the robustness and generalizability experiments. Note that while the performance gap is negative on average, and considerably negative in the scenario of LOB-2021 and LOB-2022, a few systems outperform the claimed results, as highlighted by the arrows in Table 2.
2. All models are very sensitive to hyperparameters: they diverged (F1-score ≤ 33%) during the hyperparameter search in about half of the runs.
3. The ranking of the systems changes considerably if we compare the declared performances with those measured in our experiments. On the other hand, the best six systems on FI-2010 remain the same on LOB-2021 and LOB-2022.
Based on these experiments (summarized in Table 2), the BINCTABL model achieves the highest F1-score when averaged over the seeds and prediction horizons, reaching 82.6% ± 7.0. Notably, BINCTABL also exhibits the strongest robustness score, 99.7, ranking as the best in terms of robustness. For a more comprehensive analysis, Figure 2 provides the confusion matrices of the BINCTABL model's predictions for two horizons (k = 1 and k = 10).
The confusion matrices show that the model is slightly biased toward the stationary class. This pattern is consistent across all the models, especially for the first three horizons, reflecting the imbalance of the dataset towards the stationary class, as specified in Section 4.1.
Remarkably, a significant number of models in our study failed to achieve the claimed performance levels. Two possible reasons are the lack of the original code and the missing declaration of hyperparameters. Among the models, TRANSLOB and ATNBOF exhibit the largest discrepancies, ranking as the second-worst and worst performers, respectively. Notably, ATNBOF performs the poorest among all models, both in terms of robustness score and F1-score. We observed that CNN1, CNN2, CNNLSTM, TLONBOF, and DLA are the models most sensitive to network weight initialization and dataset shuffling: these models exhibit a standard deviation over the runs that exceeds 5 points, indicating a high degree of variability in their performance.
Finally, we highlight that none of the top three models in our study utilizes market observations of length h = 100 as input, despite this being common practice in the literature [26-28, 32, 36]; they achieve good results without relying on a large historical context. This suggests that the most influential and relevant dynamics impacting their predictions tend to occur within a short time frame. In Section 6 in SUP, we analyze in more detail the robustness results of our benchmark study when varying the horizons.
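For concreteness, the robustness/generalizability score defined earlier, 100 − (|A| + S), can be sketched as follows (the claimed and measured F1-scores below are made-up numbers, not taken from Table 2):

```python
from statistics import mean, pstdev

# Score = 100 - (|mean difference| + std of differences) between the
# claimed F1-score and the F1-scores measured across runs.
def score(claimed_f1, measured_f1s):
    diffs = [claimed_f1 - m for m in measured_f1s]
    a, s = mean(diffs), pstdev(diffs)
    return 100 - (abs(a) + s)

print(round(score(88.0, [86.0, 87.0, 85.0]), 2))  # 97.18
```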
Generalizability on LOB-2021/2022 When comparing the performance of the models on the FI-2010 and LOB-2021/2022 datasets, we observe that models showing high performance on FI-2010 suffer a deterioration in performance. Conversely, some of the models that performed poorly on FI-2010 show an improvement on the LOB-2021/2022 datasets. However, the overall performance of all models on the LOB-2021/2022 datasets is still significantly lower than on FI-2010, ranging from 48% to 61% in F1-score. Furthermore, we conjecture that the overall performance is worse on LOB-2022 than on LOB-2021 due to the higher volatility of the stocks. We mention two potential factors contributing to this observed phenomenon. Firstly, the LOB-2021/2022 datasets present a higher level of complexity than the FI-2010 dataset, despite having been generated with a similar approach. Indeed, NASDAQ is a more efficient and liquid market than the Finnish one, as evidenced by the fact that the LOB-2021/2022 datasets are approximately three times the size of FI-2010 in terms of events for the same period length. Secondly, the best-performing models may overfit the FI-2010 dataset, leading to a decrease in their performance when applied to the LOB-2021/2022 datasets. In particular, BINCTABL experiences an average decrease of approximately 19.6% in F1-score across all horizons, resulting in a generalizability score of 73.5%. For a more detailed analysis of our generalizability results, we refer to Section 6 in SUP, where we also illustrate the substantial performance variation across different stocks. Among the tested stocks, CSCO stands out as yielding the highest performance. This may be attributed to the high stationarity of CSCO (18-65-17% class balance in the training set), indicating more stable and predictable behaviour. This hypothesis is supported by the confusion matrices, which consistently show the best performance on the stationary class across all models; for reasons of space, we
reported only those of BINCTABL in Figure 2, while for the complete study we refer the reader to Section 7 in SUP. Finally, as a last benchmark test, we conducted a trading simulation using LOB-2021. The results confirm the challenging nature of the task on the up-to-date LOB-2021 dataset, indicating that the models' profitability is far from guaranteed. For more detailed information about the simulation, please refer to Section 7 in SUP.

Discussion and Conclusions
Our findings highlight that price trend predictors based on DNNs using LOB data are not consistently reliable, as they often exhibit non-robust and non-generalizable performance. Our experiments demonstrate that the existing models are susceptible to hyperparameter selection, randomization, and experimental context (stocks, volatility). In addition, the selection of datasets and the experimental setup fail to capture the intricacies of real-world scenarios. This lack of generalizability makes them inadequate for practical applications in real-world settings.
Models Our results lead to a crucial observation: on the LOBSTER dataset, SOTA DL models for LOB data exhibit low generalizability. We suggest that this phenomenon is due to two factors: the higher complexity of the LOBSTER dataset compared to the FI-2010 dataset, and the overfitting of the best-performing models to the FI-2010 dataset, which lowers their performance on the LOBSTER dataset. Another key finding of this study is that the top models, with the highest performance on both datasets, employ attention mechanisms. This suggests that the attention technique enhances the extraction of informative features and the discovery of patterns in LOB data. However, in general, it appears that current models cannot cope with the complexity of financial forecasting with LOB data. Future investigations should consider state-of-the-art approaches to multivariate time series forecasting, such as [49][50][51], which have not yet been adopted in the financial sector.
Dataset Financial movements can be influenced by geopolitical events, as political actions and decisions can significantly impact economic conditions, market sentiment, and investor confidence [1]. These factors are not captured by LOB data alone. For this reason, we believe that price predictors may benefit from integrating LOB data with additional information, for example, sentiment analysis relying on social media and press data, an easily accessible source of exogenous factors impacting the market [52]. This is particularly true for mid- and long-term price trend prediction, whereas it might not hold for HFT strategies [2]. We remark that micro- and macroscopic market trends are fundamentally different, and the microscopic behaviour of the market is largely driven by HFT algorithms, making it almost exclusively dependent on financial movements rather than external factors. In this scenario, granular and raw LOBs may suffice to provide data for price trend prediction. Another weakness in dataset generation is the potential for training, validation, and test splits to have dissimilar distributions. This occurs due to the distinct characteristics of the historical periods covered by the stock time series, and it can negatively affect the model's ability to generalize effectively and make reliable predictions on unseen data.
Labelling As discussed in Sections 2, 4.1 and 4.4, the choice of the threshold for class definition in Equation 2 plays a crucial role in determining the trend associated with a market observation. We believe that current solutions leave room for improvement. As discussed in Section 4.1, in FI-2010 the parameter θ was chosen to obtain a dataset with balanced classes for the horizon k = 5 (the mean value of the considered interval in the set K). Thus, θ is not chosen in accordance with its financial implications but rather serves the purpose of balancing the dataset. We recall that the dataset is made of different stocks. With such a labelling system, for a fixed θ, stocks with low returns become associated with stable trends, as their behaviour is overshadowed by stocks exhibiting higher returns. Good practices that could be investigated are to use a weighted look-behind moving average, instead of raw mid-prices as in Equation 2, to absorb data noise, or to define a dynamically adapting θ that accounts for changing trends in a stock's mid-price. Moreover, the labelling approach of Equation 2, used by all surveyed models, fails to leverage important aspects available in LOB data, including the volume, which directly influences stock volatility. Therefore, another possible improvement is the definition and use of other insightful features that can be extrapolated from a LOB in addition to the mid-price. Such values could encapsulate other peculiar and informative characteristics, such as the stocks' spread and volumes.
Profit In the context of stock prediction tasks, it is of utmost importance to go beyond standard statistical performance metrics, such as accuracy and F1-score, and incorporate trading simulations to assess the practical value of algorithms. The ultimate measure of success of SPTP predictors lies in their ability to generate profits under real market conditions. It is essential to conduct trading simulations using realistic simulators that go beyond testing on historical data. Recent progress has been made in the context of reactive simulators [53][54][55][56].
We acknowledge that our study is subject to some limitations, which should be considered when interpreting our findings. First, we conducted a grid hyperparameter search for the models whose papers did not specify the hyperparameters. Since the hyperparameter search is not exhaustive, our chosen best hyperparameters could potentially undermine the quality of the original systems. Secondly, due to computational resource limitations, we could not train the benchmarked models on LOB datasets spanning longer periods, e.g., years rather than weeks. We recognize that doing so could have improved our generalizability results.

Disclaimer
This paper was prepared for informational purposes in part by the Artificial Intelligence Research group of JPMorgan Chase & Co. and its affiliates ("JP Morgan"), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation and warranty whatsoever and disclaims all liability for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful.

A Market Observation
We represent the evolution of a Limit Order Book (LOB) as a time series L, where each L(t) ∈ R^(4L) is called a LOB record, for t = 1, ..., N, where N is the number of LOB observations and L is the number of levels. In particular, L(t) = {P^s(t), V^s(t)}_{s ∈ {ask, bid}}, where P^ask(t), P^bid(t) ∈ R^L represent the prices of levels 1 to L of the LOB on the ask (s = ask) and bid (s = bid) side, respectively, at time t. Analogously, V^ask(t), V^bid(t) ∈ R^L represent the volumes. This means that for each t and every j ∈ {1, ..., L} on the ask side, V_j^ask(t) shares can be sold at price P_j^ask(t). Given the time series of a LOB L and a temporal window T = [t − h, t], h ∈ N, we can extract a market observation on T, M(T), by considering the sub-sequence of LOB observations from time t − h up to t.

B Models
In our experiments, we consider 13 State-Of-the-Art (SOTA) models based on DL for the SPTP task. These models were published in papers between 2017 and 2022. Two additional baselines, namely the Multilayer Perceptron (MLP) and the Long Short-Term Memory network (LSTM), were also included in our analysis, in addition to the two ensemble methods described in Section B.1. All models are trained, validated, and tested on LOB data. In the remainder of this section, we briefly describe each selected model. Tsantekidis et al. [27] (2017) use an LSTM to predict price directions considering moving averages of the mid-price over the past and the future k steps. In the same year, the same authors proposed in [28] a model based on a Convolutional Neural Network (CNN) (CNN1) for predicting future mid-price movements from large-scale high-frequency limit order data. The proposed architecture is composed of a series of convolutional and pooling layers followed by a set of fully connected layers that classify the input. The parameters of the model are learnt by minimizing the categorical cross-entropy. In [31] (2020), the same research group proposed two new architectures. The first one (CNN2) uses a series of convolutional layers to capture the temporal dynamics of time series extracted from a LOB and to correlate temporally distant features. In the last convolutional layer, CNN2 retains the temporal ordering by flattening only the dimensions of the convolution. The authors then propose an architecture that merges the described CNN with an LSTM, which we call CNNLSTM. Initially, the CNN is used for feature extraction from the LOB time series. It produces a new time series of features with the same length as the original one, which is then passed to the LSTM module for classification.
Tran et al. [29] in 2018 introduced a new Neural Network (NN) architecture for multi-variate time series that incorporates an attention mechanism in the temporal mode. The authors call this architecture Temporal Attention-Augmented Bilinear (TABL), as it applies a bilinear transformation to the input, which consists of a set of samples at different time stamps. The Bilinear Layer (BL) is able to detect feature and time dependencies within the same sample and is augmented with a temporal attention mechanism to capture interactions between different time instances. The authors define three different network configurations, called A(TABL), B(TABL), and C(TABL), with 0, 1, and 2 hidden layers, respectively. In our experiments, we consider C(TABL), which outperforms the others. In [6] (2021), the same authors extended the solutions implemented in [29] by integrating a data-driven normalization strategy that takes into account statistics from both temporal and feature dimensions to tackle potential problems posed by non-stationarity and multimodalities of the input series. The new model is called BINCTABL.
Passalis et al. [30] (2019) introduce DAIN (Deep Adaptive Input Normalization), a three-step layer that adaptively normalizes data depending on the task at hand, instead of using fixed statistics calculated beforehand as in traditional normalization approaches. DAIN works as follows: in the first layer, called the adaptive shifting layer, the mean of the current time series is scaled by the weight matrix of the first neural layer. The resulting vector is passed to the adaptive scaling layer, which first computes the standard deviation of the original feature vector with respect to the shifted one, and then scales this result using the weight matrix of the scaling layer. The last layer, called the adaptive gating layer, is meant to suppress features that are not relevant by applying a sigmoid function in order to neglect features with excessive variance, which could hinder network generalization. The authors integrate DAIN in three different architectures: an MLP proposed in [57], a CNN as in [28], and an RNN [58]. In our experiments, we consider the architecture with the highest performance, namely the MLP.
Zhang et al. [26] (2019) propose DEEPLOB. The authors propose a smooth data labelling approach based on mid-prices to limit noise and discard small oscillations. They propose a 3-block architecture composed of standard convolutional layers, an Inception Module, and an LSTM layer. The first two elements are used for feature extraction, whereas the LSTM layer captures time dependencies among the extracted features.
Wallbridge et al. [32] (2020) introduce TransLOB, a new DL architecture for mid-price trend prediction, composed of two main components: a convolutional module made up of five dilated causal convolutional layers, and a transformer module composed of two transformer encoder layers, each made up of a combination of multi-head self-attention, residual connections, normalization, and feedforward layers. Between the convolutions and the transformer module, the tensor is passed to a normalization layer and concatenated with a positional encoding. Passalis et al. [33] (2020) propose a model for high-frequency limit order book data based on a Temporal Logistic Neural Bag-of-Features formulation (TLoNBoF). Given a collection of time series, TLoNBoF extracts features with a 1-D convolutional layer to capture the temporal relationships between succeeding feature vectors. Then the features are transformed into vectors of constant length, i.e., their length must be invariant to the length of the input time series. To cope with this, the authors define a Temporal Logistic Neural Bag-of-Features formulation to aggregate the extracted feature vectors. A fine-grained temporal segmentation scheme is also proposed to capture the temporal dynamics of the time series. To this end, the transformed feature vectors are segmented into three temporal regions to capture the short-term, mid-term, and long-term behaviour of the time series.
In 2021, Zhang et al. [34] adopt Sequence-to-Sequence (Seq2Seq) [59,58] and Attention [60] models to recursively generate multi-horizon forecasts, building a predictor called DEEPLOBATT. A typical Seq2Seq model consists of an encoder that analyses the input time steps to extract meaningful features. Then, only the last hidden state from the encoder is used to make estimations, which penalizes the processing of long input sequences. To overcome this limitation, the Attention module accesses the hidden states of the encoder and assigns a proper weight to each hidden state. Each input contains the most recent 50 updates, and each update includes information for both the ask and bid sides of a LOB. Therefore, a single input has dimension (50, 40), and each output consists of a multi-horizon prediction over all five horizons of the FI-2010 dataset. As an encoder, they adapt a previous model, namely DeepLOB [26], to extract representative features from raw LOB data, while they experiment with both Seq2Seq and Attention models for the decoder.
Guo et al. [35] (2022) propose a novel architecture for price trend prediction named Deep Learning Architecture (DLA). Firstly, the dataset is preprocessed and aggregated at different time windows. Once extracted, the features are given as input to the proposed three-phase architecture. The first phase uses Temporal Attention to adaptively assign attention weights to each moment of the sliding window. The processed data is passed to a stacked Gated Recurrent Unit (GRU) architecture to obtain an accurate representation of the analysed trends, which is complex and nonlinear. The GRU architecture consists of two hidden GRU layers that output the hidden state at each time period. This is given to the second temporal attention stage, which is used to generate more accurate attention weights. The proposed solution is compared to several other models in the literature, including C(TABL) [29], DeepLOB [26] and TLo-NBoF [33]. It achieves very high performance on the FI-2010 dataset, outperforming the other models. The authors analyse the performance of their model by varying several parameters, including label thresholds and the choice of the time step.
Tran et al. [36] extend the solution proposed in [61], which introduces a neural bag-of-features (N-BoF) based method for building a codeword that is eventually fed to a classifier. In [36], the neural bag-of-features model is enhanced by incorporating a 2D-Attention (2DA) module that highlights important elements in the matrix data while discarding irrelevant ones by zeroing them out. The 2D-Attention function performs a linear interpolation between the input data matrix and the input data matrix filtered by an attention mask matrix that encodes the importance of the columns of the original input. The proposed 2DA block can be applied to the features to highlight or discard the outputs of certain quantization neurons, whose results are considered equally important in the NBoF model for every input sequence (Codeword Attention). The resulting model is called ATNBoF. The 2DA function can also be applied to lend weight to salient temporal information, which is otherwise aggregated and contributes equally to the quantized features in the NBoF model (Temporal Attention). Kiesel et al.
[37] (2022) propose Axial-LOB, a model based on axial attention for price trend prediction. Unlike the naive attention mechanism, axial attention factorizes 2D attention into two 1D attention modules, one along the width (feature) axis and a second one along the height (time) dimension. Raw values of the LOB are preprocessed and passed to the axial attention block: each layer of the attention block is preceded and followed by a module composed of 1 × 1 convolutions, batch normalization, and ReLU activation to adjust the number of channels in the intermediate layers of the network. For training the axial attention module, the authors use mini-batch Stochastic Gradient Descent (SGD), minimizing the cross-entropy loss between the predicted class probabilities and the ground truth labels. The authors compare the performance of the proposed model against the solutions adopted in [28,29,26] in terms of precision, recall, and F1 on the FI-2010 dataset. Axial-LOB proves to have improved performance with respect to these works while being simpler in terms of the number of parameters.

B.1 Ensemble Methods
To explore the possibility of achieving new SOTA performance by combining the predictions of all 15 models, we have implemented two ensemble methods: MAJORITY and METALOB.
The MAJORITY ensemble assigns the class label that appears most frequently among the predictions of the classifiers. To account for variations in the performance of individual classifiers, we incorporate a weighting scheme based on their F1-scores. This ensures that predictions from higher-performing models carry more influence in the final decision.
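The F1-weighted vote can be sketched as follows. Model names and scores here are made up for illustration; labels are encoded as 0 = down, 1 = stationary, 2 = up.

```python
# Minimal sketch of the F1-weighted majority vote described above.

def weighted_majority(predictions, f1_scores):
    """predictions: {model: label}; f1_scores: {model: weight}.
    Returns the label with the largest total weight."""
    tally = {}
    for model, label in predictions.items():
        tally[label] = tally.get(label, 0.0) + f1_scores[model]
    return max(tally, key=tally.get)

# Toy example: the two strong models vote "up", the weak one votes "down".
preds = {"binctabl": 2, "deeplob": 2, "mlp": 0}
f1s   = {"binctabl": 0.81, "deeplob": 0.77, "mlp": 0.48}
decision = weighted_majority(preds, f1s)  # "up" wins: 0.81 + 0.77 > 0.48
```

Without the weighting, this reduces to a plain majority vote; the weights only matter when the classifiers disagree.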
The METALOB meta-classifier is implemented as a multilayer perceptron (MLP) with two fully-connected layers. It is designed to learn how to effectively combine the outputs of the 15 DL models, which serve as the base classifiers, to produce the final output. The input to the meta-classifier is a 1D tensor with a probability distribution over the trends (up, stationary, down) for each of the models, resulting in a tensor of 3 · 15 elements. The test set of LOB-2021/2022 is divided into three distinct subsets: we allocated 70% of the data for training, 15% for validating, and the remaining 15% for testing the meta-classifier.
By implementing these ensemble methods, our objective was to leverage the collective intelligence of the models and potentially achieve performance that surpasses that of individual models. Unfortunately, the ensemble models did not achieve the expected level of performance, as they failed to surpass the best individual models, as discussed in the main paper.

C Stock Selection
For our generalizability study, in order to create a variegated evaluation scenario, we curated a pool of 630 stocks from the NASDAQ exchange with market capitalizations ranging from ∼2 Billion to ∼3 Trillion dollars. Data was gathered from the NASDAQ Stock Screener [62]. From the pool of stocks, we generated 6 clusters with t-distributed Stochastic Neighbor Embedding (t-SNE) to capture differences among stocks in the years 2021-2023. We used the following features: daily return, hourly return, volatility, outstanding shares, P/E ratio, and market capitalization. The P/E ratio is the ratio between the price of a stock (P) and the company's annual earnings per share (E). The analysis led to the identification of the 6 stocks nearest to the cluster centroids of the generated 3-dimensional latent space. The stocks are the following: SoFi Technologies (SOFI), Netflix (NFLX), Cisco Systems (CSCO), Wingstop (WING), Shoals Technologies Group (SHLS), and Landstar System (LSTR), making up the set that we denote by S = {SOFI, NFLX, CSCO, WING, SHLS, LSTR}. Table 3 captures the main features of these stocks for the period of July 2021. The selected stocks have very variable average daily returns, the minimum being SHLS and the maximum being NFLX. Daily and hourly returns highlight that some stocks are more volatile than others. The market capitalization represents the total value of the outstanding common shares owned by stockholders. The stocks also show different class balancing in the training set: CSCO is the stock with the largest imbalance toward the stable class, whereas NFLX and LSTR are more imbalanced towards the up and down classes, respectively. In Section 6, as well as in the main paper, we analyze the reasons behind the class imbalance specific to individual stocks and discuss its impact.

D Datasets

FI-2010
The most widespread public LOB dataset is FI-2010, which was proposed in 2017 by Ntakaris et al. [12] with the objective of evaluating the performance of machine learning models on the SPTP task. The dataset consists of LOB data from five Finnish companies of the NASDAQ Nordic stock market: Kesko Oyj, Outokumpu Oyj, Sampo, Rautaruukki, and Wärtsilä Oyj (KESKOB, OUT1V, SAMPO, RTRKS, WRT1V, respectively). Data spans the period from June 1st to June 14th, 2010, corresponding to 10 trading days (trading happens only on business days). About 4 million limit order messages are stored for ten levels of the LOB. The dataset has an event-based granularity, meaning that the time series records are not uniformly spaced in time. LOB observations are sampled at intervals of 10 events (an event being the submission of an order that causes a LOB update), resulting in a total of 394,337 events.
In FI-2010, the first 7 of the 10 trading days are dedicated to the training set, while the remaining 3 days constitute the test set. We also extracted a validation set from the training set as the last 20% of its samples to perform hyper-parameter tuning, as in [26]. Moreover, as FI-2010 is already normalized, we selected the version with z-score normalization for our experiments. Finally, the dataset is already provided with labels for each horizon k ∈ K = {1, 2, 3, 5, 10}, obtained by leveraging the trend definitions described in Equation 2 of the main paper. Such a labelling scheme is very sensitive to the threshold θ with respect to the resulting balance among "upward", "downward" and "stable" trends. The authors of the dataset employed a single threshold θ = 0.002 for all horizons, but it balances the classes only in the case k = 5. Varying the horizon k ∈ K, class imbalance occurs as shown in Table 4. Class imbalance is not addressed, to guarantee a fair robustness evaluation, since the considered works do not claim to have done so.
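The threshold-sensitive ternary labelling discussed above can be sketched as follows. The exact trend definition (Equation 2 of the main paper, possibly with mid-price smoothing) varies across papers, so this simple relative-change variant is only illustrative.

```python
# Hedged sketch of threshold-based ternary trend labelling.
THETA = 0.002  # the single threshold used by the FI-2010 authors

def label_trend(mid_prices, t, k, theta=THETA):
    """Return 'up', 'down' or 'stable' for horizon k at index t,
    based on the relative mid-price change."""
    change = (mid_prices[t + k] - mid_prices[t]) / mid_prices[t]
    if change > theta:
        return "up"
    if change < -theta:
        return "down"
    return "stable"

mids = [100.0, 100.05, 100.5, 99.7, 100.0, 100.01]
# label_trend(mids, 0, 2): +0.5% change, above theta  -> "up"
# label_trend(mids, 0, 5): +0.01% change, below theta -> "stable"
```

A smaller θ shrinks the "stable" band and inflates the up/down classes, which is exactly why a single θ cannot balance all horizons at once.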

LOB-2021 and LOB-2022
To study the generalizability of the 15 models, we extracted the market observations (see Section C) from the LOBSTER dataset in two time periods: July 2021 (2021-07-01 to 2021-07-15, ten trading days) (see Table 3), making up LOB-2021, and February 2022 (2022-02-01 to 2022-02-15, ten trading days), making up LOB-2022. These two periods proved to be different in terms of volatility, as the impact of the war in Ukraine made the market more volatile and unstable. The mid-price trends for these two periods and for the selected stocks are depicted in Figure 4. We built the two datasets associated with these two time periods resembling the structure of the FI-2010 dataset, described in the previous section and proposed in [12]. To generate the LOB-2021/2022 datasets, we utilize the LOBSTER data, which consists of LOB records (i.e., L(t) vectors) resulting from events caused by traders at the exchange. LOBSTER associates these records with the specific events that caused changes in the LOB. We isolated the following types of events: order submissions, deletions, and executions, which account for almost all the events in the markets.
For each stock in the set S, we construct a stock time series of LOB records L_s(t) ∈ R^{4L}, with L = 10, s ∈ S, t ∈ [1, N_s], where N_s is the number of records of stock s in the considered temporal interval (e.g., (2021-07-01, 2021-07-15) for LOB-2021). We recall that the 4 · 10 features represent the prices and volumes on the buy and sell sides for the ten levels of the LOB. We highlight that the time series L_s are non-uniform in time, since LOB events can occur at irregular intervals driven by traders' actions. We do not impose temporal uniformization. Instead, we sample the market observation every ten events, as for FI-2010. Furthermore, we do not account for liquidity beyond the 10th level of the LOB. This approximation is necessary to ensure computational tractability while retaining the most influential levels; it is a commonly employed technique in stock market prediction models, also employed in FI-2010.
Each stock time series L_s is split into training, validation, and testing sets using a 6-2-2 days split. Normalization is performed on the stock time series using a z-score approach, normalizing prices and volumes separately. The mean and standard deviation are calculated from the union of the training and validation splits of all stock time series. These statistics are then used to normalize the entire dataset, including the test splits. The final dataset is constructed by vertically stacking (i.e., concatenating along the rows) the six training splits (i.e., one for each stock), the six validation splits, and the six test splits, in this order.
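The split-aware z-score normalization can be sketched as follows: statistics come from the training and validation portions only, and are then applied to the whole series, test split included. Plain Python with illustrative names.

```python
# Sketch of z-score normalization with statistics from train+validation only.
from statistics import mean, pstdev

def zscore_normalize(series, n_trainval):
    """Normalize a series with mean/std computed on its first n_trainval samples."""
    mu = mean(series[:n_trainval])
    sigma = pstdev(series[:n_trainval])
    return [(x - mu) / sigma for x in series]

prices = [10.0, 12.0, 11.0, 14.0, 9.0, 30.0]   # last two values in the test split
normalized = zscore_normalize(prices, n_trainval=4)
```

Computing the statistics without the test split avoids leaking future information into the model, which is why the test portion may end up with large z-scores when its distribution drifts.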
The dataset is used to extract market observations with a sliding window approach, as explained in Section 3 of the main paper. Labelling market observations is accomplished by leveraging the trend definitions described in Equation 2 of the main paper, mapping market observations to the corresponding trend based on a predefined prediction horizon k ∈ K. It is important to note that for each prediction horizon k ∈ K, a new dataset is generated. Consequently, LOB-2021 and LOB-2022 each consist of five (i.e., |K|) distinct datasets, one for each prediction horizon.

E Hyperparameters Search
For evaluating the robustness of the surveyed models, we used the hyperparameters reported in the original papers whenever they were available. However, we encountered cases where hyperparameters were not declared at all, such as in LSTM [27] and CNN1 [28], while in other cases, including CNNLSTM [31], AXIALLOB [37], ATNBOF [36] and DAIN [30], only partial information was provided. To address these gaps, we performed a grid search exploring different values for the batch size, namely {16, 32, 64, 128, 256}, and the learning rate, namely {0.01, 0.001, 0.0001, 0.00001}.
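The grid search over these two hyperparameters can be sketched as below. The scoring function is a mock stand-in for an actual training-plus-validation run, so the example stays self-contained.

```python
# Sketch of the batch-size / learning-rate grid search described above.
from itertools import product

BATCH_SIZES = [16, 32, 64, 128, 256]
LEARNING_RATES = [0.01, 0.001, 0.0001, 0.00001]

def train_and_validate(batch_size, lr):
    """Placeholder: a real run would train the model and return validation F1.
    This mock peaks at batch_size=64, lr=0.001 purely for illustration."""
    return 1.0 / (abs(batch_size - 64) + 1) + 1.0 / (abs(lr - 0.001) * 1000 + 1)

best = max(product(BATCH_SIZES, LEARNING_RATES),
           key=lambda cfg: train_and_validate(*cfg))
# best == (64, 0.001) for this mock scorer
```

With 5 × 4 = 20 configurations per model, such a search is tractable even when repeated per horizon.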
Regarding the generalizability experiment, we found that the majority of models using the hyperparameters from the robustness analysis performed poorly on the LOB-2021/2022 datasets. We therefore conducted a comprehensive hyperparameter search on horizon k = 5 (which is the most balanced) using a grid search approach for all the models. For this search, we maintained the same number of epochs and optimizer used in the robustness analysis, while searching for batch size and learning rate over the same domains mentioned above. For a complete overview of the hyperparameters utilized in our experiments, refer to Table 5.

F Additional Experimental Results
Robustness Figure 5 depicts the F1-score, accuracy, precision, and recall of the surveyed models obtained through our framework LOBCAST, for the time horizons K = {1, 2, 3, 5, 10}. Most of the models show similar behaviour with respect to the prediction horizons. In particular, regarding the F1-score, the worst performance is obtained for k = 2, after which there is an increasing trend as the prediction horizon increases. This might sound counterintuitive, as it amounts to forecasting the price trend in a more distant future. However, for very short horizons, the labelling system adopted may be susceptible to noise, affecting the models' capability to extract relevant patterns.
Figure 6 shows a bar chart representing the F1-score of the 17 models reproduced using the LOBCAST framework for the five prediction horizons K. The plot shows black empty bars representing the declared performance in the corresponding paper, when applicable. In the figure, the number on each bar represents the performance obtained in LOBCAST on the FI-2010 dataset, and the value in brackets indicates the difference between the obtained performance and the performance originally declared in the respective paper. We highlight that not all papers declare their performance for all the horizons. The figure clearly highlights how robust the considered models are. Surprisingly, for CNN2 and CNNLSTM, our experiments achieved noticeably higher performance than the one declared in the original paper. We also observe that the BINCTABL model consistently emerges as one of the top-performing models across all the horizons. Moreover, its performance closely aligns with that reported in the paper presenting it.
The largest discrepancy is observed for TRANSLOB and ATNBoF (or TNBoF-TA), whose average performances differ by 28% from the original results. On average, ATNBoF achieves an F1-score of only 40.9%. This substantial deviation from the claimed performance highlights the challenges and limitations associated with this particular model.
Figure 7 shows the agreement matrix of the models for the horizon k = 5. As expected, the highest agreement (≈80%) is among the best-performing models, namely BINCTABL, AXIALLOB, DEEPLOB, CTABL, DLA and DEEPLOBATT. The model that exhibits the least correlation with the other models is MLP. BINCTABL augments CTABL with an adaptive normalization layer, enabling joint normalization of the input time series along both temporal and feature dimensions. This enhancement yields a remarkable improvement, with an average increase of 9.2% in the F1-score compared to the second-best model (DLA). Interestingly, BINCTABL has only 11,446 parameters, which makes it very fast at inference time (0.0005s).
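An agreement matrix like the one in Figure 7 simply measures, for each pair of models, the fraction of test samples on which their predictions coincide. A minimal sketch with toy labels (not actual LOBCAST outputs):

```python
# Sketch of a pairwise prediction-agreement matrix.

def agreement_matrix(preds):
    """preds: {model: [label, ...]} with equal-length prediction lists.
    Returns {(model_a, model_b): fraction of matching predictions}."""
    models = sorted(preds)
    n = len(next(iter(preds.values())))
    return {(a, b): sum(pa == pb for pa, pb in zip(preds[a], preds[b])) / n
            for a in models for b in models}

toy = {"binctabl": [2, 1, 0, 2],
       "deeplob":  [2, 1, 1, 2],
       "mlp":      [0, 1, 0, 1]}
agree = agreement_matrix(toy)
# agree[("binctabl", "deeplob")] == 0.75: they match on 3 of 4 samples
```

The diagonal is 1.0 by construction; off-diagonal entries near the class-frequency baseline indicate models that disagree almost at random.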
Generalizability In this section, we provide additional results on the generalizability of the models. We evaluate the performance of the models on two different datasets: LOB-2021 and LOB-2022. The evaluation metrics used include F1-score, accuracy, precision, and recall, which are displayed in Figure 8 for LOB-2021. The plots for LOB-2022 are omitted since they show similar properties.
We observe that most models exhibit a similar trend on both the LOB-2021 and LOB-2022 datasets. However, the performance curves in these generalizability tests differ from the results obtained on the FI-2010 dataset, shown in Figure 5. Specifically, for the LOB-2021/2022 datasets, the F1-score of most models shows an increasing trend as the prediction horizon increases up to k = 3, after which it starts to decrease.
To ease readability, Table 6 reports the F1-score for all models, horizons, and periods.
The performance of the models, as reported by the authors of the selected papers, changes when they are evaluated on the LOB-2021 and LOB-2022 datasets. These changes reveal varying degrees of generalizability among the models.
Notably, the ATNBoF model demonstrates the most substantial improvement with respect to the declared performances, showing an average increase of 12.2% across all prediction horizons. A similar improvement is exhibited by MLP and TLONBoF. Despite this improvement, ATNBoF still exhibits the lowest overall performance, with an average score of 53.1%. It is worth mentioning that ATNBoF is the most sensitive to random initialization.
In contrast, the other models experience a significant decline in performance when evaluated on the LOB-2021 and LOB-2022 datasets. For example, the previously best-performing model on the FI-2010 dataset, BINCTABL, shows an average decrease in F1-score of approximately 19.6% across all prediction horizons. This decline results in a generalizability score of 73.5% (as mentioned in Table 2 of the main paper). Despite this decline, BINCTABL remains the top-performing model when evaluated on the LOB-2021 dataset on almost all prediction horizons. On these datasets, it exhibits performance similar to the DEEPLOB and DEEPLOBATT models.
Figure 9 shows the agreement matrix on LOB-2022. Considering the more flattened performances of the models on the LOB-2021/2022 datasets compared to the FI-2010 dataset, the agreement percentages among the models are consistently high, and no distinct patterns are observed. Unlike on FI-2010, where METALOB matched BINCTABL's predictions 82.8% of the time, on LOB-2022 (and also LOB-2021) METALOB showed no preference for any model, resulting in a balanced agreement rate (≈33%) among all models. We do not include the agreement matrix of LOB-2021 because it is similar to that of LOB-2022.
In Figure 10, we present the results of our tests for the time horizon k = 5 on each individual stock from the LOB-2021 dataset. Among the tested stocks, CSCO stands out as yielding the highest performance.

Labelling The experiments shown above highlight that the models' performance does not exhibit a clear trend with respect to the prediction horizon. The labelling method is probably the cause of this phenomenon; in fact, classifying trends based on the mid-price tends to embody noise on the nearer horizons. This hypothesis is supported by the work of Zhang et al. [26]: specifically, they generated a dataset using an alternative labelling method that relies on the mean of the previous and next k mid-prices to identify trends. Interestingly, they observed an inverse trend in performance with respect to the horizons; in fact, the best performances were achieved with the shortest horizon and deteriorated as it increased. While exploring various labelling techniques is beyond the scope of this benchmark, we provide an initial investigation in this direction. Specifically, focusing on k = 5 in LOB-2021, we select two stocks, NFLX and SOFI.
Based on Equations 1 and 2 of the main paper, we can define θ_N and θ_S as the thresholds that balance the occurrences of the classes for the stocks NFLX and SOFI, respectively. Similarly, we can define θ_0 as the threshold that balances the occurrences of the classes for the ensemble of the six stocks within the dataset. NFLX (θ_N) (respectively, SOFI (θ_S)) denotes the training of the models over the NFLX (SOFI) stock using the threshold θ_N (θ_S).
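One way such a class-balancing threshold could be estimated is to pick θ so that roughly one third of the samples fall into the "stable" band |change| ≤ θ, with up and down splitting the rest (assuming the return distribution is roughly symmetric). This is only an illustration of the idea, not the paper's exact procedure.

```python
# Hedged sketch: estimate a balancing threshold from observed relative changes.

def balancing_threshold(changes):
    """Return the |change| at the one-third quantile, so that about a third
    of the samples are labelled 'stable' when using |change| < theta."""
    abs_changes = sorted(abs(c) for c in changes)
    return abs_changes[len(abs_changes) // 3]

changes = [0.004, -0.001, 0.0005, -0.006, 0.002, -0.0002]
theta = balancing_threshold(changes)  # one third of |changes| lie below it
```

A per-stock threshold (θ_N, θ_S) comes from running this on that stock's changes alone; θ_0 comes from pooling the changes of all six stocks.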
In the case of SOFI, all methods, except for BINCTABL, achieve the highest performance in the ALL (θ_0) setting. This indicates that these models are able to extract useful signals from other stocks, reducing overfitting and improving overall performance. On the other hand, comparing the SOFI (θ_0) and SOFI (θ_S) settings does not provide significant insights. This suggests that the balancing of the three classes is not crucial for achieving higher performance. This is even more evident for NFLX in Figure 11a, considering that the imbalance due to θ_0 is much higher (see Table 3).
These results indicate that the labelling mechanism should be revised from its current definition and made agnostic with respect to the balancing involved. Trend definitions should not solely depend on the magnitude of the future price shift relative to the current price. Other factors, such as persistence over time and volume considerations, should also be taken into account. A more comprehensive discussion of the limitations and challenges associated with the labelling mechanisms can be found in the main paper, particularly in the final discussion and conclusions section.

G Profit Analysis
As a final benchmark test, we conducted a trading simulation using our framework, relying on the Backtesting.py Python library⁵. As highlighted by [24], most of the existing literature in the SPTP field neglects backtesting, even though it is essential for evaluating the performance of algorithmic trading strategies and for potential real-world use.
We performed backtesting using the same period as the test set of the LOB-2021 dataset, i.e., from 2021-07-13 to 2021-07-15. To perform backtesting, we generated an Open High Low Close (OHLC) time series with a 10-event period. OHLC is an aggregation technique that summarizes periods of a time series, e.g., minutes, hours, days, or a number of events (10 in this case). Each data point of the series consists of four aggregates of the considered period: the Open is the first price of the period, the High is the highest price, the Low is the lowest price, and the Close is the last price.
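The OHLC aggregation over fixed-size event buckets can be sketched as follows (3 events per bar here to keep the toy example short; the text uses 10).

```python
# Sketch of OHLC aggregation over fixed-size event buckets.

def ohlc(prices, period):
    """Aggregate a price series into (open, high, low, close) bars."""
    bars = []
    for i in range(0, len(prices) - period + 1, period):
        chunk = prices[i:i + period]
        bars.append((chunk[0], max(chunk), min(chunk), chunk[-1]))
    return bars

ticks = [10.0, 10.2, 9.9, 10.1, 10.4, 10.3]
bars = ohlc(ticks, period=3)
# first bar: open 10.0, high 10.2, low 9.9, close 9.9
```

Libraries such as Backtesting.py consume exactly this kind of bar series, regardless of whether the bucketing unit is wall-clock time or an event count.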
We base our trading simulation on the methodology of the seminal paper [26] in this field, in which the authors conducted a similar experiment. We established certain parameters for our simulation. Firstly, we set the number of shares per trade to a fixed value of 1, simplifying our analysis and assuming a negligible market impact. Furthermore, our simulated trader begins with an initial capital of $10,000, and we assume no transaction fees.
The trading strategy relies on the models and operates by generating signals every 10 events to predict subsequent price movements. These signals, categorized as up, stationary, or down, determine the trading action. When the signal is up, the simulated trader places a buy order. Conversely, if the signal is down and the trader currently holds a long position, the trader places a sell order. In cases where the signal is stationary, the trader takes no action. The orders are filled at the next open price.
The results of the trading simulation for each stock are presented in Figure 12. The strongest correlation observed is between the daily returns of the stocks, as shown in Table 3, and the returns of the strategy described above. In fact, the two stocks with the highest positive daily returns (namely LSTR and NFLX) are the only ones for which the strategy is profitable. On the other hand, the two stocks with the highest negative daily returns (SOFI and SHLS) are the ones for which most models show a negative return. Another correlation, albeit less strong, is between the volatility of the stocks and the return of the models. Specifically, lower volatility is associated with higher model returns.
We recognize the limitations of this simulation. For instance, we do not perform portfolio optimization or position sizing, we assume that trades execute at the mid-price, and we ignore transaction costs; however, a realistic and sophisticated algorithmic trading simulation is beyond the scope of this study and remains an interesting aspect for future research.

Figure 3
represents an observation M(T) ∈ R^{h×4L}. The market observation over the window [t − h, t] is associated with the label computed at time t through Equations 1 and 2 in the main paper. A Stock Price Trend Prediction (SPTP) Deep Learning (DL) model takes as input a market observation and outputs a probability distribution over the trend classes U, D, and S.

Figure 3 :
Figure 3: An example of market observation.

Figure 4 :
Figure 4: Selected stocks mid-price normalized by the midprice of the first day.

Figure 5 :
Figure 5: Evaluation metrics on different horizons K on FI-2010 dataset.

Figure 11
Figure 11 shows the results of three different training settings: (i.) ALL (θ_0) represents the training of the models over the ensemble of all six stocks using the threshold θ_0; (ii.) NFLX (θ_0) (SOFI (θ_0)) represents the training of the models over the NFLX (SOFI) stock using the threshold θ_0; (iii.)

Figure 11 :
Figure 11: Different labelling strategies on NFLX and SOFI stocks for k = 5.

Table 2 :
Robustness, generalizability, and performance scores of the models. Arrows indicate whether the measured F1 of a system is higher or lower than stated in the original paper. Colour saturation highlights systems with the best (green) and worst (red) robustness and generalizability scores.
Table 2 clearly highlights the following:

Table 5 :
Hyperparameters adopted in our experiments.

Table 6 :
F1-score on LOB-2021 and LOB-2022. Columns FI-2010, FI-2010 (reproduced), LOB-2021, and LOB-2022 respectively report the performance claimed in the original papers and the performance reproduced with LOBCAST on FI-2010, LOB-2021, and LOB-2022.