Learning State Transition Rules from High-Dimensional Time Series Data with Recurrent Temporal Gaussian-Bernoulli Restricted Boltzmann Machines

Understanding the dynamics of a system is crucial in various scientific and engineering domains. Machine learning techniques have been employed to learn state transition rules from observed time-series data. However, these data often contain sequences of noisy and ambiguous continuous variables, while we typically seek simplified dynamics rules that capture essential variables. In this work, we propose a method to extract a small number of essential hidden variables from high-dimensional time-series data and learn state transition rules between hidden variables. Our approach is based on the Restricted Boltzmann Machine (RBM), which models observable data in the visible layer and latent features in the hidden layer. However, real-world data, such as video and audio, consist of both discrete and continuous variables with temporal relationships. To address this, we introduce the Recurrent Temporal Gaussian-Bernoulli Restricted Boltzmann Machine (RTGB-RBM), which combines the Gaussian-Bernoulli Restricted Boltzmann Machine (GB-RBM) to handle continuous visible variables and the Recurrent Temporal Restricted Boltzmann Machine (RT-RBM) to capture time dependencies among discrete hidden variables. Additionally, we propose a rule-based method to extract essential information as hidden variables and represent state transition rules in an interpretable form. We evaluate our proposed method on the Bouncing Ball, Moving MNIST, and dSprite datasets. Experimental results demonstrate that our approach effectively learns the dynamics of these physical systems by extracting state transition rules between hidden variables. Moreover, our method can predict unobserved future states based on observed state transitions.


Introduction
Learning the dynamics of a system is crucial in scientific and engineering domains. Traditionally, dynamics have been expressed in symbolic forms such as equations, programs, and logic, which provide explicit interpretability and generality [1][2][3]. However, the increasing volume of data, noise, ambiguity, and the complexity of the dynamics pose challenges in learning and extracting rules from large-scale data [4][5][6][7][8]. To tackle this problem, there have been attempts to combine symbolic methods with deep learning to learn dynamics from large data [9,10]. While some dynamics can be effectively expressed using quantitative relationships, such as classical mechanics and electromagnetism, others are better represented by state transition rules, such as Boolean networks (BNs) [11] and Cellular Automata (CA) [12]. Symbolic methods have shown success in learning these types of dynamics [13][14][15].
Among the proposed methods, the restricted Boltzmann Machine (RBM) has been utilized to learn latent factors that capture the essence of dynamics [16,17]. RBM, a probabilistic neural network, treats observable data as visible variables and latent features as hidden variables. It allows the expression of symbolic representations, such as propositional formulas, learned through maximum likelihood estimation of the RBM energy function [18][19][20]. However, existing RBM-based approaches have not extensively explored learning hidden representations from raw data, such as images. Furthermore, predictions of the dynamics have been made as a black-box process within the network, which limits interpretability.
To address these limitations and challenges, this work proposes a method that extracts a small number of essential factors as hidden variables and expresses their relationships as interpretable rules. Specifically, we introduce the recurrent temporal Gaussian-Bernoulli restricted Boltzmann Machine (RTGB-RBM), which combines the Gaussian-Bernoulli restricted Boltzmann Machine (GB-RBM) [21] for handling continuous visible variables and the recurrent temporal restricted Boltzmann Machine (RT-RBM) [22,23] for capturing time dependence between discrete latent variables. RTGB-RBM can effectively learn the dynamics by representing them as state transitions between hidden layers and reconstructing the original dynamics from the transitions in the hidden layers. Additionally, we extract interpretable state transition rules from the trained RTGB-RBM, providing insight into the learned dynamics.
The contributions of this work include the proposed RTGB-RBM, which captures the time dependence of continuous visible and discrete hidden variables, and the method for extracting interpretable state transition rules between hidden variables. Furthermore, our method achieves predictive performance comparable to existing methods for various dynamics and enables interpretable rule learning.
This paper is an extended version of our conference paper [24], where we have expanded the rule extraction and pruning methods, conducted additional experimental analyses using new datasets, and provided an updated discussion of related work. The remainder of this paper is organized as follows: Sect. 2 presents related works on Boltzmann Machines, dynamics learning, and rule learning and compares them with our approach. Section 3 provides an overview of GB-RBM and RT-RBM, including their definitions and algorithms. In Sect. 4, we describe our proposed method, RTGB-RBM, along with its learning technique, and explain the process of extracting rules between hidden layers from the trained model. Section 5 evaluates the proposed method using the Bouncing Ball, Moving MNIST, and dSprites datasets. We compare the predictions of RTGB-RBM with those obtained from existing methods and demonstrate the superior performance of our proposed method. We also introduce a method for reducing the model size without compromising prediction performance by pruning unimportant nodes based on rule importance. Finally, Sect. 6 concludes the paper by summarizing the main contributions, discussing the limitations of our approach, and outlining potential avenues for future work.

Related Work
This section discusses related work on Boltzmann Machines, dynamics learning, and rule learning.

Boltzmann Machines
Boltzmann Machine (BM) is a probabilistic neural network and can be considered a stochastic variant of the Hopfield network [25]. It possesses fascinating theoretical aspects due to its Hebbian learning rule and its connections to physics. However, the original BM suffers from exponential training time, rendering it impractical for large-scale real-world datasets. To address this, the restricted Boltzmann Machine (RBM) [16] was proposed, which prohibits connections between units in the same layer. Additionally, an efficient training method called Contrastive Divergence (CD) [17] has been developed specifically for RBM. RBM consists of a visible layer that handles observed data and a hidden layer that handles latent features.
Several extended versions of RBM have been proposed, including the Gaussian-Bernoulli restricted Boltzmann Machine (GB-RBM) [21], capable of handling both real-number and binary values, and the recurrent temporal restricted Boltzmann Machine (RT-RBM) [22,23], designed for time-series data. By stacking multiple layers of RBM, the learning of hidden units becomes more efficient, leading to the emergence of the Deep Boltzmann Machine (DBM) [26], which serves as a foundation for modern deep learning approaches. Furthermore, through persistent efforts, RBM has achieved expressiveness comparable to modern generative models such as the Variational Autoencoder (VAE) and the Generative Adversarial Network (GAN) [27][28][29]. In the realm of symbolic computation, such as Knowledge Representation and Reasoning (KRR), RBM has been utilized to learn symbolic representations. For instance, [18,19] express propositional formulas using visible and hidden variables, inferring symbolic knowledge through maximum likelihood estimation of the RBM's energy function. The Logical Boltzmann Machine (LBM) [20] converts propositional formulas in DNF into RBMs to achieve efficient reasoning and establishes a connection between minimizing the energy of an RBM and Boolean formula satisfiability. Notably, [18] applies these RBM-based symbolic knowledge extraction methods to images, bridging the gap between symbolic representations and real-world data. However, these approaches have not yet been applied to time-series data, such as videos, and many aspects of learning and prediction remain black box in nature.

Dynamics Learning
Many researchers have proposed various methods for learning dynamics, with recent years witnessing a surge in techniques based on deep learning. For instance, [30,31] employ neural networks to learn visual information from dynamics and make video predictions. Likewise, [32] utilizes audio data to reconstruct original audio and perform speaker recognition. Many of these methods are based on the Variational Autoencoder (VAE) [27], a generative model similar to Boltzmann Machines. These approaches extract hidden representations from inputs and leverage them to reconstruct dynamics, proving highly effective in learning dynamics. However, these methods often operate as black boxes, learning input-output relationships without providing insights into the internal workings of the networks. As a result, understanding which factors are important for predicting and reconstructing dynamics can be challenging. Several VAE-based methods have been proposed to address this, aiming to disentangle hidden variable dimensions and maximize the separation of independent information within the original data [33]. Additionally, [34] disentangles a latent representation into static and dynamic parts, while [35] treats the latent representation as a categorical condition to control the output. Disentanglement allows for a better grasp of the meaning behind each latent variable. Nevertheless, explicitly expressing these relationships as equations or rules remains challenging. Our method offers a unique contribution by acquiring interpretable rules that capture the dynamic nature and state transitions of latent information. This gives us deeper insights into the internal mechanisms of the network.

Rule Learning
While our main objective is to propose methods for learning dynamics, it is equally crucial to represent dynamics in an interpretable form. Symbolic methods, such as Learning from interpretation transition (LFIT) [15], have successfully learned dynamics in Boolean networks (BNs) [11] and Cellular Automata (CA) [12]. LFIT is an unsupervised learning algorithm that derives rules expressing dynamic relationships between observable variables from state transitions. However, LFIT is unable to handle dynamical systems that involve unobservable hidden or latent variables. Previous studies, including LFIT, have focused on describing dynamics based on observed variables. To address this limitation, some methods have combined LFIT with neural networks (NNs) to enhance robustness to noisy data and continuous variables. For example, NN-LFIT [36] extracts propositional logic rules from trained NNs, while D-LFIT [37] translates logic programs into embeddings, infers logical values through differentiable semantics, and utilizes optimization methods and NNs to search for embeddings. These frameworks learn state transition rules as normal logic programs (NLPs). It is crucial to consider the relationship between symbolic representation and computational complexity in terms of scalability [38].
Dealing with a large number of discrete variables, such as symbols, leads to an exponential increase in the number of possible combinations, posing computational complexity challenges. Therefore, it is essential to use computationally efficient algorithms, omit unnecessary inputs and redundant information, retain necessary information, and describe their relationships. Our proposed method addresses these challenges by simultaneously reducing input dimensionality, extracting essential information, and expressing relationships among the extracted information as rules. Despite having a large input dimension, our method mitigates combinatorial problems due to the reduced number of dimensions in the hidden layer.

Gaussian-Bernoulli Restricted Boltzmann Machine
Gaussian-Bernoulli restricted Boltzmann Machine (GB-RBM) [21] is defined on a complete bipartite graph as shown in Fig. 1. The visible layer V holds v = {v_i ∈ ℝ | i ∈ V}, the real-valued variables directly associated with the input-output data, and the hidden layer H holds h = {h_j ∈ {+1, −1} | j ∈ H}, the discrete binary variables representing latent features of the system that are not directly associated with the input-output data. s = {s_i | i ∈ V} is the parameter associated with the variance of the visible variables. The energy function of the GB-RBM is defined as

E(v, h) = Σ_{i∈V} (v_i − b_i)² / (2s_i²) − Σ_{j∈H} c_j h_j − Σ_{i∈V} Σ_{j∈H} (v_i / s_i²) w_{ij} h_j,

and the conditional probability distributions are

p(v_i = v | h) = N(v; b_i + Σ_j w_{ij} h_j, s_i²)   (3)
p(h_j = 1 | v) = σ(c_j + Σ_i w_{ij} v_i / s_i²)   (4)

where N(·; μ, s²) denotes the Gaussian density with mean μ and variance s², and σ(·) is the sigmoid function. By (3), given h, the probability of v_i = v is calculated, and we can sample v_i from it. By (4), given v, the probability of h_j = 1 is calculated, and we can sample h_j from it.
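As a concrete illustration, the two conditional distributions (3) and (4) can be sketched in NumPy as below. This is a minimal sketch with toy dimensions and random weights (all names and sizes are illustrative, not from the paper); for simplicity the hidden units are encoded as {0, 1} rather than the {+1, −1} coding above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and parameters (illustrative, not from the paper).
n_visible, n_hidden = 6, 4
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))  # weights w_ij
b = np.zeros(n_visible)      # visible biases b_i
c = np.zeros(n_hidden)       # hidden biases c_j
s2 = np.ones(n_visible)      # visible variances s_i^2

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_visible(h):
    # Eq. (3): v_i | h is Gaussian with mean b_i + sum_j w_ij h_j, variance s_i^2
    return rng.normal(b + W @ h, np.sqrt(s2))

def sample_hidden(v):
    # Eq. (4): p(h_j = 1 | v) = sigmoid(c_j + sum_i w_ij v_i / s_i^2)
    p = sigmoid(c + W.T @ (v / s2))
    return (rng.random(n_hidden) < p).astype(float)

v = rng.normal(size=n_visible)  # a dummy observation
h = sample_hidden(v)            # encode: visible -> hidden
v_rec = sample_visible(h)       # decode: hidden -> visible
```

Alternating these two sampling steps is exactly the Gibbs chain used later for training.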

Recurrent Temporal Restricted Boltzmann Machine
A recurrent temporal restricted Boltzmann Machine (RT-RBM) [22,23] is an extension of RBM suitable for handling time-series data. The RT-RBM has connections from the hidden variables of the past k frames {h_{t−k}, h_{t−k+1}, ..., h_{t−1}} to the current visible variables v_t and hidden variables h_t. This paper assumes that the state at time t depends only on the previous state at time t − 1 and fixes k = 1. The RT-RBM for k = 1 is shown in Fig. 2. In RT-RBM, in addition to the parameters W, b, and c defined in Sect. 3.1, we newly define U = {u_{jj′} | j, j′ ∈ H}, the set of parameters relating the hidden variables at times t and t − 1. These model parameters are collectively denoted by θ = {W, U, b, c}. The expected value of the hidden vector ĥ_t at time t is defined as

ĥ_{t,j} = σ(Σ_i w_{ij} v_{t,i} + Σ_{j′} u_{jj′} ĥ_{t−1,j′} + c_j).   (5)

The conditional probability distributions of v_{t,i} and h_{t,j} are inferred by

p(v_{t,i} = 1 | h_t, ĥ_{t−1}) = σ(Σ_j w_{ij} h_{t,j} + b_i)   (6)
p(h_{t,j} = 1 | v_t, ĥ_{t−1}) = σ(Σ_i w_{ij} v_{t,i} + Σ_{j′} u_{jj′} ĥ_{t−1,j′} + c_j).   (7)

By (6), given h_t and ĥ_{t−1}, the probability of v_{t,i} = 1 is calculated, and we can sample v_{t,i} from it. By (7), given v_t and ĥ_{t−1}, the probability of h_{t,j} = 1 is calculated, and we can sample h_{t,j} from it.

Recurrent Temporal Gaussian-Bernoulli Restricted Boltzmann Machine

The proposed RTGB-RBM combines the GB-RBM and the RT-RBM: the visible variables become real-valued, and the conditional probability distributions become

p(v_{t,i} = v | h_t, ĥ_{t−1}) = N(v; b_i + Σ_j w_{ij} h_{t,j}, s_i²)   (8)
p(h_{t,j} = 1 | v_t, ĥ_{t−1}) = σ(Σ_i w_{ij} v_{t,i} / s_i² + Σ_{j′} u_{jj′} ĥ_{t−1,j′} + c_j),   (9)

where ĥ_t is calculated by (5). By (8), given h_t and ĥ_{t−1}, the probability of v_{t,i} = v is calculated, and we can sample v_{t,i} from it. By (9), given v_t and ĥ_{t−1}, the probability of h_{t,j} = 1 is calculated, and we can sample h_{t,j} from it. The architecture of RTGB-RBM is shown in Fig. 3.
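One inference step of an RTGB-RBM, in the spirit of (5), (8), and (9), can be sketched as follows. Dimensions, weights, and the choice to decode the visible layer via its Gaussian mean are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy parameters (illustrative).
n_v, n_h = 5, 3
W = rng.normal(0.0, 0.1, (n_v, n_h))   # visible-hidden weights w_ij
U = rng.normal(0.0, 0.1, (n_h, n_h))   # temporal hidden-hidden weights u_jj'
b = np.zeros(n_v)
c = np.zeros(n_h)
s2 = np.ones(n_v)                      # visible variances s_i^2

def expected_hidden(v_t, h_hat_prev):
    # Expectation of h_t given v_t and the previous expectation
    # (Eq. (5), with visible inputs scaled by the variance as in Eq. (9)).
    return sigmoid(W.T @ (v_t / s2) + U @ h_hat_prev + c)

def step(v_t, h_hat_prev):
    h_hat = expected_hidden(v_t, h_hat_prev)
    h_t = (rng.random(n_h) < h_hat).astype(float)  # sample hidden units
    v_mean = b + W @ h_t                           # Gaussian mean, Eq. (8)
    return h_t, v_mean, h_hat

v0 = rng.normal(size=n_v)
h_t, v_mean, h_hat = step(v0, np.full(n_h, 0.5))  # start from a neutral h_hat
```

Rolling `step` forward, feeding `h_hat` back in at each frame, yields the model-based prediction used in the experiments.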

Training
We update the parameters of RTGB-RBM so that the likelihood L is maximized.
The parameter set θ = {W, U, b, c} that maximizes the likelihood of the observed sequences is estimated by the gradient method θ ← θ + η ∂log L/∂θ. The gradients of each parameter are calculated as follows:

∂log L/∂w_{ij} = ⟨v_{t,i} h_{t,j}⟩_data − ⟨v_{t,i} h_{t,j}⟩_model
∂log L/∂b_i = ⟨v_{t,i}⟩_data − ⟨v_{t,i}⟩_model
∂log L/∂c_j = ⟨h_{t,j}⟩_data − ⟨h_{t,j}⟩_model
∂log L/∂u_{jj′} = ⟨h_{t,j} ĥ_{t−1,j′}⟩_data − ⟨h_{t,j} ĥ_{t−1,j′}⟩_model

Learning w_{ij} and b_i increases the accuracy of reconstructing data in the visible layer from the hidden layer, and learning u_{jj′} and c_j increases the accuracy of predicting transitions. ⟨x⟩_data is the mean of x over the data, and ⟨x⟩_model is the expected value of x under the model. To obtain the expected value ⟨x⟩_model exactly, all combinations of x must be considered. However, since the number of combinations grows exponentially, it is not feasible to compute all of them. Therefore, we approximate ⟨x⟩_model using Gibbs sampling.
Here we define v_t(k) and h_t(k) by repeating Gibbs sampling k times, assuming h_{t−1} is given. Then, the expected values are approximated by Algorithm 1. By repeating Gibbs sampling, we obtain v_t(k) and h_t(k) from v_t(0) and h_t(0). If we take k large enough, we can approximate the expected values as ⟨v_{t,i} h_{t,j}⟩_model ≈ v_{t,i}(k) h_{t,j}(k). Although training using plain Gibbs sampling requires a large k, in this study we use the Contrastive Divergence (CD) method [39] for efficient training. In the CD method, we set v_t(0) = v_t^(n), the n-th training sample, instead of randomly initializing v_t(0), which makes it possible to train networks even when k is small.
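A CD-k update can be sketched as below for the W, b, c part of the model. A Bernoulli visible layer and a single time step are assumed for brevity (the real model has Gaussian visibles, and the temporal term u_jj′ is updated analogously from the ⟨h_t ĥ_{t−1}⟩ statistics); all sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_v, n_h, lr, K = 4, 3, 0.01, 3     # toy sizes, learning rate, CD steps
W = rng.normal(0.0, 0.1, (n_v, n_h))
b = np.zeros(n_v)
c = np.zeros(n_h)

def cd_update(v_data):
    """One CD-K gradient estimate: <x>_data - <x>_model, where the model
    term is approximated by a short Gibbs chain started at the data."""
    ph_data = sigmoid(c + W.T @ v_data)
    v = v_data.copy()                 # CD: v_t(0) is the training sample
    for _ in range(K):
        h = (rng.random(n_h) < sigmoid(c + W.T @ v)).astype(float)
        v = (rng.random(n_v) < sigmoid(b + W @ h)).astype(float)
    ph_model = sigmoid(c + W.T @ v)
    dW = np.outer(v_data, ph_data) - np.outer(v, ph_model)
    return dW, v_data - v, ph_data - ph_model

v_sample = (rng.random(n_v) < 0.5).astype(float)
dW, db, dc = cd_update(v_sample)
W += lr * dW
b += lr * db
c += lr * dc
```

Starting the chain at the data rather than from noise is exactly what lets K stay small.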

Extracting State Transition Rules
We extract state transition rules from the trained RTGB-RBM.
Definition (State Transition Rule) A state transition rule describes the transition from the state at time t to the state at time t + 1 by a rule of the form

p_j :: L_{t+1} ← L_{t,1} ∧ L_{t,2} ∧ ... ∧ L_{t,m}   (11)

where m is the number of hidden variables in each hidden layer and L_{t,j} (1 ≤ j ≤ m) is a literal that represents a hidden variable h_{t,j} or its negation ¬h_{t,j}. L_{t+1} is the head of the rule, L_{t,1} ∧ L_{t,2} ∧ ... ∧ L_{t,m} is the body of the rule, and p_j is the probability that the j-th rule fires.
The rule (11) means "L_{t+1} becomes true with probability p_j if all of L_{t,1}, L_{t,2}, ..., L_{t,m} are true." To extract rules of the form (11) from the trained RTGB-RBM, we convert the network parameters into rules by (12) and (13). Suppose the hidden unit h_{t+1,j} is connected to the hidden unit h_{t,j′} by the weight u_{jj′}. Then, if u_{jj′} ≥ 0, the positive literal h_{t,j′} is added to the body of the j-th rule; conversely, if u_{jj′} < 0, the negative literal ¬h_{t,j′} is added. Equation (13) shows that the larger (−Σ_{j′} u_{jj′}) is, the higher the probability of h_{t+1,j} being activated; conversely, the smaller (−Σ_{j′} u_{jj′}) is, the lower that probability. The extracted rules represent the temporal relationship between the hidden layers at times t and t + 1. State transitions in the hidden layer can be calculated using the extracted rules, and once the state of the hidden layer is determined, the state of the visible layer can be decoded using equation (8).
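The weight-to-rule conversion can be sketched as follows. The body literals follow the sign of u_jj′ exactly as described above; the probability p_j is computed here with an assumed sigmoid surrogate, standing in for the paper's Eq. (13), whose exact form is not reproduced.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def extract_rules(U):
    """Convert temporal weights u_jj' into rules of form (11).
    Head h[t+1,j] gets body literal h[t,j'] if u_jj' >= 0,
    and ~h[t,j'] otherwise."""
    rules = []
    for j in range(U.shape[0]):
        body = [f"h[t,{jp}]" if U[j, jp] >= 0 else f"~h[t,{jp}]"
                for jp in range(U.shape[1])]
        p = sigmoid(np.abs(U[j]).sum())  # assumed surrogate for p_j
        rules.append((p, f"h[t+1,{j}]", body))
    return rules

# Toy temporal weight matrix for two hidden units.
U = np.array([[0.8, -0.3],
              [0.1,  0.5]])
for p, head, body in extract_rules(U):
    print(f"{p:.4f} :: {head} <- {' ^ '.join(body)}")
```

For the toy matrix, unit 0 gets the body h[t,0] ∧ ¬h[t,1] because u_00 ≥ 0 and u_01 < 0.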

Example Suppose we get a rule p_1 :: h_{t+1,1} ← h_{t,1} ∧ h_{t,2} ∧ ¬h_{t,3}, as shown in Fig. 4. This rule represents that if h_{t,1} = 1, h_{t,2} = 1, and h_{t,3} = 0, then h_{t+1,1} will be 1 with probability p_1. The value of h_{t+1,1} is determined by (12): in this case, u_{11} and u_{21} are greater than 0, so the positive literals h_1 and h_2 are added to the body of the rule; conversely, u_{31} is less than 0, so the negative literal ¬h_3 is added. In the same way, h_{t+1} can be determined by applying rules to h_{t+1,2} and h_{t+1,3}, respectively. Then, by (8), v_{t+1} is decoded from h_{t+1}.
The overview of our method is illustrated in Fig. 5. We have two types of predictions: model-based and rule-based. Given the observed state sequence {v_0, v_1, ..., v_t} as input, the former predicts future states using the RTGB-RBM directly, and the latter predicts future states by the interpretable rules (12) extracted from the trained RTGB-RBM.
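The rule-based prediction step in the hidden layer can be sketched as follows. This is a simplified, hypothetical interpreter: a head fires with probability p_j only when all body literals hold, and a unit whose body does not hold is left off, which is an assumption not spelled out above.

```python
import numpy as np

rng = np.random.default_rng(3)

def apply_rules(h_t, U, p):
    """One rule-based transition step. For head j, the body holds when
    h[t,j'] = 1 wherever u_jj' >= 0 and h[t,j'] = 0 wherever u_jj' < 0;
    the head then fires with probability p[j]."""
    n = len(h_t)
    h_next = np.zeros(n)
    for j in range(n):
        body_holds = all((h_t[jp] == 1) == (U[j, jp] >= 0) for jp in range(n))
        if body_holds and rng.random() < p[j]:
            h_next[j] = 1.0
    return h_next

# Toy weights and deterministic rule probabilities for the demo.
U = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])
p = np.array([1.0, 1.0])
h_next = apply_rules(np.array([1.0, 0.0]), U, p)  # -> [1., 0.]
```

Iterating `apply_rules` and then decoding each hidden state through the visible conditional gives the rule-based rollout compared against the model-based one in the experiments.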

Experiments
We conduct experiments on three datasets, Bouncing Ball [40], Moving MNIST [41], and dSprite [33], to evaluate our proposed method. The number of sequences in the dataset (N), the number of frames in each sequence (T), and the number of pixels in each frame (n) for these three datasets are listed in Table 1. The nature of each dataset is different. For example, for Bouncing Ball, we only need to predict the shape, position, and velocity of a single-color ball. In Moving MNIST, on the other hand, we need to learn ten different shapes (0-9), which increases the difficulty of shape reconstruction. In addition, dSprite requires learning not only position and shape but also color, size, and rotation. Because of the different attributes and learning difficulty of each dataset, we changed the number of hidden variables and the number of training epochs for each experiment.
In each experiment, we first train the RTGB-RBM and then extract rules from the trained RTGB-RBM. Then, the hidden units with low contribution are pruned from the network, using the probability of the rule as a contribution to the prediction. Finally, we confirm that removing hidden units with low contribution did not reduce prediction accuracy much. The hyperparameters necessary for each experiment, such as the number of hidden variables and the number of sampling iterations, are described in the sub-sections corresponding to each experiment.

Setting
The Bouncing Ball dataset is generated by the neural physics engine (NPE) [40]. The dataset is a simulation of multiple balls moving around in a two-dimensional space surrounded by walls on all four sides. The number of balls, radius, color, speed, etc., can be changed. We experimented with two types of videos, with only one ball and with three balls. In both cases, we generated 10000 sequences, each containing 100 frames of 100x100 pixels. For evaluation, we set N = 10000 , T = 90 , and T � = 100.

Training
We experimented with the one-ball and three-ball cases. In both cases, the number of visible units is set to 10000, the number of hidden units to {10, 30, 100}, and the number of CD iterations K to {3, 10, 20}. Learning curves are shown in Fig. 6. The figure shows that the larger the number of hidden units, the better the prediction performance. It also shows that the dynamics are harder to learn in the three-ball case than in the one-ball case, and that the number of CD iterations makes little difference, although fewer iterations are slightly better. The RTGB-RBM learns rapidly in the first epoch, and learning progresses slowly from the second epoch onward. These results show that the proposed method can learn the dynamics of bouncing balls in the early phase of training.
An example of prediction by the trained RTGB-RBM is shown in Fig. 7. In the one-ball case, our model can predict ball-to-wall collisions. In the three-ball case, our model can also predict ball-to-ball collisions. This result shows that our model can learn not only the ball's trajectory but also the concept of collision.

Relationship between rules and features
In this subsection, we show the relationship between the extracted rules and the features that the RTGB-RBM has learned. To gain a visual understanding of what the extracted hidden variables represent, we compute a feature map for each hidden unit by applying the weights W to it, as in (15).
The feature maps of v for the one-ball case calculated by (15) are shown in Fig. 8. These feature maps capture the ball's position and direction. For example, the map in the top left corner indicates that the ball is located near the center of the lower side, and the middle map above it indicates that the ball is moving from left to right. By combining these features, our model predicts the trajectory of the ball.
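The feature-map computation of (15) amounts to reshaping each hidden unit's weight column back to the image grid, which can be sketched as below (the image side length, hidden-unit count, and weight values are toy illustrations):

```python
import numpy as np

side, n_hidden = 4, 3   # toy image side length and number of hidden units
# Illustrative weight matrix: one row per pixel, one column per hidden unit.
W = np.arange(side * side * n_hidden, dtype=float).reshape(side * side, n_hidden)

def feature_map(W, j, side):
    """Feature map of hidden unit j: its weight column reshaped to the
    image grid, the quantity visualized in Fig. 8."""
    return W[:, j].reshape(side, side)

fmap = feature_map(W, 0, side)
```

Plotting `fmap` for each j (e.g. with a heatmap) reproduces a panel of the kind shown in Fig. 8.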
We evaluate the rule (16) as an example. For simplicity, negative literals are removed here. Associating the extracted rule with the feature maps, we obtain Fig. 9. The rule (16) represents that if h_{t,0}, h_{t,1}, h_{t,2}, h_{t,6} = 1 and h_{t,3}, h_{t,4}, h_{t,5}, h_{t,8}, h_{t,9} = 0, then h_{t+1,3} becomes 1 with probability 0.8732. Figure 9 implies that h_{t,0}, h_{t,2}, h_{t,6} represent features moving in the lower-right direction, and that h_{t,3} has a large value in the lower right corner.
In Fig. 9, the rule represents that the next feature map in the head is generated by combining the ball's current position and direction, represented by the four features in the body. We extract such rules for all hidden variable state transitions. In Fig. 10, we show an example of predicting v t+1 from v t by using extracted rules. If we apply the learned rules to h t = [1, 1, 1, 0, 0, 0, 1, 0, 0, 0] , we get h t+1 = [0, 1, 0, 1, 0, 0, 1, 0, 0, 0] , then v t+1 is decoded from h t+1 .
Although our rules can represent temporal relationships between hidden layers, they cannot capture the relationship between the hidden and visible layers, nor the relationships among visible units within the same layer. Therefore, when obtaining the state of the visible layer, the state must be decoded by Gibbs sampling based on the parameters of the network. Through this mechanism, the essential information of the dynamics is determined by interpretable rules, while the visual information is decoded by probabilistic inference.

Comparative Experiment
We compare RT-RBM, RTGB-RBM, and rule-based predictions. The learning curves of these three models are shown in Fig. 11. The results show that, in the one-ball case, both RTGB-RBM and rule-based predictions reach accuracy comparable to RT-RBM. The deterioration of rule-based prediction is more distinct in the three-ball case than in the one-ball case, probably because the dynamics in the three-ball case contain more intrinsic information that cannot be expressed as rules. Fig. 12 (left) shows the predictions of RTGB-RBM and of the rules when h = 10. In the one-ball case, the trajectory of the ball is predicted by both the rules and the RTGB-RBM. In the three-ball case, on the other hand, the predictions of RTGB-RBM drift increasingly far from the ground truth, and the predictions of the rules no longer stay on the same trajectory as the ground truth. This result shows that rules with h = 10 are sufficient to predict the trajectory of one ball but are not expressive enough to predict the trajectories of three balls.
On the other hand, with a larger number of hidden units, both RTGB-RBM and rule-based predictions improve, predicting the ball's trajectory and bounces in Fig. 12 (right). This result indicates that prediction accuracy increases with the number of hidden units, even for rule-based prediction: as the number of hidden variables increases, more state transitions can be expressed by rules. It also indicates that a sufficient number of hidden units is needed to predict the dynamics.
The rule-based method has lower prediction accuracy than the other two methods: it predicts the ball's trajectory for the first five or six steps, then gradually deviates from the ground truth. One possible reason is that, since rule-based transitions occur probabilistically, noise and error accumulate as the state transitions proceed, leading to incorrect predictions. Furthermore, it is difficult to describe the dynamics of multiple objects, such as three balls, in our rule form. We conjecture that per-ball rules are not expressive enough and that rules representing interactions and relationships between balls are needed; for example, the rules must represent each ball's position, direction, and collisions.
Although the limitations described above remain, our method can predict the state transitions of the dynamics and learn rules between hidden layers that correspond to the feature maps. Using hidden variables reduces the size of the state transition representation: while the original dynamics consist of many visible variables, this method can represent the dynamics with a few rules. Furthermore, since the rules are expressed in form (11), they are interpretable (e.g., Fig. 9), yet rule-based predictions remain comparable to RTGB-RBM predictions (e.g., Fig. 11 (left)).

Network pruning
We evaluate each hidden unit's contribution based on the p_j of the j-th rule and prune units with low p_j from the network. This allows lightweight inference while preserving the information necessary to reconstruct the original dynamics. However, since some information loss is expected due to pruning, this subsection evaluates the relationship between the number of remaining hidden units and the reconstruction error. Fig. 13 shows the relationship between the loss and the number of remaining hidden units when h = 100. For example, the loss at h = 60 is the prediction loss of the network after removing the 40 hidden units with the lowest p_j. The graph shows that prediction performance does not decrease significantly even if about 90 out of 100 hidden units are removed. Figure 14 shows example predictions with 70 and 90 hidden units pruned from the original 100. The ball position and trajectory are predicted well in both cases, although prediction performance drops compared to the original network. This result shows that making state transitions interpretable by rules is also useful for network pruning.
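The pruning step can be sketched with the hypothetical helper below: it keeps the n_keep hidden units with the largest rule probabilities p_j and slices all hidden-side parameters accordingly (names and sizes are illustrative).

```python
import numpy as np

def prune_hidden_units(W, U, c, p, n_keep):
    """Keep the n_keep hidden units with the highest rule probability p_j,
    dropping low-contribution units from W (visible-hidden), U
    (hidden-hidden), and c (hidden biases)."""
    keep = np.sort(np.argsort(p)[::-1][:n_keep])   # indices of kept units
    return W[:, keep], U[np.ix_(keep, keep)], c[keep], keep

# Toy example: 2 visible units, 3 hidden units, prune down to 2.
W = np.ones((2, 3))
U = np.eye(3)
c = np.zeros(3)
p = np.array([0.1, 0.9, 0.5])
W2, U2, c2, kept = prune_hidden_units(W, U, c, p, n_keep=2)  # keeps units 1 and 2
```

Slicing U with `np.ix_` keeps both the rows and columns of the retained units, so the temporal connections among them survive intact.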

Moving MNIST
We evaluate our method on Moving MNIST [41], which is more complex than the Bouncing Ball dataset. Moving MNIST consists of sequences of MNIST digits. It contains 10,000 sequences; each sequence contains 20 frames of 64x64 pixels in which 2 digits move. Originally, each pixel takes a value between 0 and 1, but to make the digit outlines more distinct, we threshold each pixel value at 0.1. In Bouncing Ball, there were ball-to-ball and wall-to-ball collisions, but in Moving MNIST there are only wall-to-digit collisions and no digit-to-digit collisions; the digits pass through each other. Therefore, we need to learn to reconstruct different digits and to predict trajectories, wall-to-digit collisions, and overlaps.
The learning curves of RT-RBM, RTGB-RBM, and the rule-based method on Moving MNIST are illustrated in Fig. 15. We trained the networks for 100 epochs with the first 5 frames as input and the remaining 15 frames as prediction targets. Figure 15 shows that RTGB-RBM's predictions are better than RT-RBM's.
In addition, the rule-based method approached the accuracy of RTGB-RBM and eventually became more accurate than RT-RBM. This result indicates that as the network parameters are optimized through training, the expressiveness of the extracted rules increases and the loss of information due to extraction is reduced.
An example of RTGB-RBM and rule-based prediction is shown in Fig. 16. The sequence contains the digits 2 and 9. Our methods reconstruct the different shapes of the digits and predict their trajectories. When the digits overlap, the prediction becomes ambiguous, and it seems impossible to distinguish the digits from the ambiguous frame. Nevertheless, the model can predict the trajectories after the overlap. From this result, we conjecture that the hidden layers contain features that distinguish the digits, and that the transitions between hidden layers contain enough information to reconstruct the trajectories after overlap. In addition, although rule-based prediction is less accurate than RTGB-RBM prediction, it can predict trajectories because the essential information is preserved in the extracted rules.
The relationship between network pruning and prediction performance is shown in Fig. 17. The graph shows that prediction performance does not decrease significantly until about 400 hidden units are removed. This result indicates that about 600 hidden units are needed to adequately represent the dynamics of Moving MNIST. Figure 18 shows that the shape and trajectory of the digits are well predicted with 600 hidden units, but the shape is not well predicted with 100 hidden units. These results indicate that the 100 hidden units still contain information for predicting trajectories but not for preserving shape; in other words, the pruned 900 hidden units contained the information needed to preserve shape. By pruning the network based on rules, we can evaluate which units are effective for prediction.

dSprite
The dSprites dataset [33] was created to assess the disentangled properties of unsupervised learning methods. The dataset contains image sequences of 2D shapes (square, ellipse, heart) procedurally generated from six ground-truth independent latent factors. These factors are sprite color, shape, scale, rotation, and X and Y position.
To simplify the experiment, we modified the original dSprite so that it consists of 3 different shapes (square, ellipse, heart), 6 values of scale, 40 values of orientation, and 4 colors (white, red, green, blue), and limited the motion to simple horizontal or vertical movement. In each sequence, the scale gradually increases or decreases frame by frame. Similarly, after an initial random choice of orientation, subsequent frames rotate the shape clockwise or counterclockwise. The final dataset consists of approximately one hundred thousand data points. Each sequence includes 8 frames, each frame consisting of 32x32 pixels. In this experiment, our method needs to learn multiple attributes (color, shape, size change, rotation, position, and direction of movement).
Learning curves for RTGB-RBM and rule-based predictions are shown in Fig. 19. In dSprite, input values are not binary but range from 0 to 255, so RT-RBM, which can handle only binary values, cannot be applied. Figure 19 shows that our methods' performance becomes stable after 150 epochs of training. Figure 20 shows that our method can learn multiple attributes; even the rule-based method can reconstruct and predict these properties. This result indicates that in the dynamics of dSprite, the essential attributes (color, shape, size change, rotation, position, and direction of movement) are preserved even in rule form.
The results of network pruning are shown in Fig. 21. The graph shows that prediction remains relatively accurate even when around 900 hidden units are pruned, but deteriorates significantly beyond that point. Figure 22 shows the prediction for a rotating red ellipse. Even with 100 hidden units, the color, shape, and rotation are successfully predicted, although the loss increases roughly tenfold. Each frame is 32x32x3-dimensional, so after pruning, the network predicts the dSprite dynamics with 100 units, about 1/30th of the input dimension. This is likely because units essential to prediction appear in more important rules, so even when less important hidden units are pruned, the subnetwork essential to prediction remains. These results indicate that the 100 remaining units and their transitions contain enough information to predict these attributes.

Discussion
Our experimental results demonstrate the effectiveness of our method in learning system dynamics, offering several notable advantages.
Firstly, our approach allows us to extract interpretable rules that capture the underlying dynamics as state transitions between hidden variables. This enhances our understanding of the system's behavior and provides valuable insight into the relationships between variables.
Additionally, our method excels in predicting unobserved future states based on observed state transitions, showcasing its ability to capture the temporal evolution of the system. Another significant advantage of our method is its flexibility in handling both continuous and discrete values. Unlike many existing approaches that struggle with converting continuous data into discrete values, our method seamlessly integrates both data types, enabling a more comprehensive analysis of dynamics. This capability proves particularly valuable in real-world applications where data often exhibit mixed types.
Finally, our method offers a unique advantage by learning interpretable rules that effectively capture system dynamics. By combining interpretability, predictive accuracy, and flexibility in handling different data types, our approach opens new avenues for understanding and leveraging dynamics in various scientific and engineering domains.
Nevertheless, we also acknowledge certain limitations that warrant further attention. While our learned rules are interpretable, understanding the meaning behind each rule, particularly with respect to attributes such as color or shape, can be challenging. To address this, future work will focus on refining our rule-based method, including developing a more structured representation capable of handling multiple attributes simultaneously. Additionally, we aim to extend our rule form to incorporate both visible and hidden variables, as well as continuous and discrete values. Furthermore, we recognize the importance of disentangling the interpretation of each variable within the rules. Currently, our rule extraction process does not consider the weights of individual hidden units, treating units with both small and large weights as simple literals in the rules. To address this, we will explore more expressive rules that take the weights into account, contributing to a more comprehensive and accurate understanding of the learned dynamics.

Conclusion
In this study, we proposed RTGB-RBM, a method combining GB-RBM and RT-RBM to handle continuous visible variables and time dependence between discrete hidden variables. We also introduced a rule-based approach to extract essential information as hidden variables and represent state transition rules in an interpretable form. Furthermore, we developed a network pruning method to evaluate the contribution of hidden units based on the rules, retaining only those units containing essential information about the dynamics.
Our experimental results demonstrated the effectiveness of our methods in predicting future states of Bouncing Ball, Moving MNIST, and dSprite datasets. Furthermore, by correlating the learned rules with the features represented by the hidden units, we discovered that these rules contain crucial information for determining the future state based on the current information. Additionally, through the network pruning process, we assessed the number of hidden units required to represent the dynamics.
Overall, our proposed method offers the potential to extract latent relationships as rules without the need for prior discretization or preprocessing. We plan to extend and refine our approach based on the insights gained from this study, conducting more comprehensive experiments on various dynamic datasets. Theoretical analysis will also be pursued to provide a deeper understanding of the effectiveness of our method.