Multi Sentence Description of Complex Manipulation Action Videos

Automatic video description requires generating natural language statements about the actions, events, and objects in a video. When humans describe a video, they can do so at variable levels of detail. In contrast, existing approaches for automatic video description mostly focus on single-sentence generation at a fixed level of detail. Here, instead, we address video description of manipulation actions, where different levels of detail are required to convey information about the hierarchical structure of these actions, which is also relevant for modern approaches to robot learning. We propose one hybrid statistical framework and one end-to-end framework to address this problem. The hybrid method needs far less training data, because it statistically models the uncertainties within the video clips, while the end-to-end method, which is more data-hungry, connects the visual encoder directly to the language decoder without any intermediate (statistical) processing step. Both frameworks use LSTM stacks to allow for different levels of description granularity, so videos can be described by simple single sentences or complex multi-sentence descriptions. In addition, quantitative results demonstrate that these methods produce more realistic descriptions than other competing approaches.


Introduction
Describing video content using natural language has emerged as a formidable challenge, capturing significant attention in recent years. Despite the considerable advancements showcased in the "Related Works" section, the field continues to grapple with significant challenges, with deep learning-based methodologies predominantly driving progress.

Figure 1: An example of the performance of the hybrid method in producing detailed, multiple- and single-sentence descriptions of a screwing scenario [6] containing 8 video snippets (1, ..., 8). Detailed sentences, left hand: (1) The left hand touches the top of a screwdriver on the table. (2) They untouch the table. (3) They move together above the table. (4) They touch a hard disk on the table. (5) They move together inside of a hard disk on the table. (6) They untouch the top of the hard disk and move together above the table. (7) They touch the table. (8) The left hand untouches the top of the screwdriver on the table. Right hand: (1, 2) Idle. (3) The right hand touches near ("around" relation) the hard disk on the table. (4) The right hand continues to touch near the hard disk on the table. (5) The right hand untouches near the screwdriver from the table. (6, 7, 8) Idle. Multiple sentences, left hand: (1, 2, 3, 4) The left hand picks up a screwdriver from the table and places it on a hard disk. (4, 5) They perform screwing in the inside of the hard disk on the table. (6, 7, 8) The left hand leaves the screwdriver on the table. Right hand: (3, 4) The right hand keeps touching near the hard disk on the table. (5) The right hand leaves the hard disk on the table. One sentence, left hand: The left hand performs screwing inside of a hard disk on the table by a screwdriver. Right hand: The right hand holds the hard disk and then lets go.
This study specifically targets videos of human manipulation actions, which are complex but crucial for applications in robotics, especially for learning from demonstrations. Video and language interactions between humans and robots are becoming more common, making it important for machines to understand and generate clear and relevant sentences. However, human manipulation actions are complex and need a careful approach to arrive at good video descriptions.
In response to this need, we have introduced two distinct methodologies to tackle this issue. The first is a hybrid statistical method, which adeptly combines video analysis techniques with the generation of simplified semantic representations (SRe), seamlessly integrated with a stack of Long Short-Term Memory (LSTM) networks. Our second approach is an end-to-end trained LSTM stack, designed to operate directly on the video data. While the latter demands a substantial volume of training data to function optimally, the hybrid method significantly reduces this requirement through its use of SRe representations, without compromising the quality of the generated descriptions. Both methods exhibit exemplary performance, surpassing existing state-of-the-art solutions in the field of video description generation.

Figure 2: An example of the performance of our end-to-end method in producing detailed, multiple- and single-sentence descriptions of a wiping scenario [6]. (8) The left hand leaves the sponge on the table. Right hand: (4, 5, 6, 7) The right hand picks and places a cup on the table. (8) The right hand then leaves the cup on the table. One sentence, left hand: The left hand wipes the table by a sponge. Right hand: The right hand lifts and puts a cup on the table.
Our approach stands out because it can create different levels of descriptions, from very detailed to more general statements about the video. This flexible approach, shown in Figs. 1 and 2, lets users choose the level of detail they need. Whether it is creating a detailed instruction manual from a video for training purposes or a general story for a movie, our method adapts to provide the right level of detail.
The capability to generate multi-level descriptions is of paramount importance in robotic applications and learning from demonstration. Robots often operate in diverse and dynamic environments, requiring them to understand and execute complex tasks. The multi-level representations (descriptions) generated by our method enable robots to comprehend the intricacies of human actions at various granularity levels, facilitating a more nuanced understanding and execution of tasks. Naturally, for any intrinsic understanding of actions and their sub-actions the robot does not have to utter any sentence and can make use of the descriptive depth per se. This is especially beneficial in learning from demonstration, where robots learn to perform tasks by observing human actions. The multi-level representations provide a rich semantic context, aiding the robots in deciphering the intent and subtleties of human actions, leading to more accurate and efficient task execution. The possibility of uttering a respective phrase will then, however, also allow the machine to engage in discourse, possibly asking detailed questions about different action steps.

Related Works
Research on video description has evolved into three main categories: "classical methods," "statistical algorithms," and "deep learning-based approaches."

Classical Methods The earliest approaches to video description relied on SVO (Subject, Verb, Object) triple-based methods. These methods comprised two fundamental stages: content identification and text generation. Classical computer vision (CV) and natural language processing (NLP) techniques were employed to detect visual entities in videos and map them to standard sentences using handcrafted templates. However, these rule-based systems were effective only in highly constrained environments and for short video clips [16, 12].

Statistical Algorithms
To overcome the limitations of rule-based methods, researchers turned to statistical approaches. These methods utilized machine learning techniques to convert visual content into natural language descriptions using parallel corpora of videos and their associated annotations [29, 11]. They initially involved object detection and action recognition through feature engineering and traditional classifiers, followed by the translation of retrieved information into natural language using Statistical Machine Translation (SMT) techniques [15]. However, this separation of stages made it challenging to capture the interplay between visual features and linguistic patterns, limiting the ability to learn transferable relationships between visual artifacts and linguistic representations [1].
Deep Learning-Based Approaches The success of deep learning in both computer vision (CV) and natural language processing (NLP) inspired researchers to adopt deep learning techniques for video description tasks. Vinyals et al. [35] introduced an image description model based on an encoder-decoder architecture, using Convolutional Neural Networks (CNNs) as image encoders and Long Short-Term Memory networks (LSTMs) as language decoders. Building on this, Venugopalan et al. [34] proposed a similar framework for video annotation, where features were extracted from each video frame, and LSTM layers were used to generate textual descriptions. However, this approach struggled to capture the temporal order of events in a video.
Subsequent advancements included Pan et al.'s unified framework with visual semantic embedding [24], which was extended to exploit temporal information in videos [23]. Yu et al. [39] introduced hierarchical Recurrent Neural Networks (RNNs) for video captioning. Wang et al. [36] designed a reconstruction framework with a novel encoder-decoder architecture, leveraging both forward (video to sentence) and backward (sentence to video) flows for video captioning.
Krishna et al. [18] suggested a two-stage method for describing short and long events, where events in a video were first detected, and descriptions were generated based on event dependencies. Nian et al. [22] presented an enhanced sequence-to-sequence model for video captioning, incorporating a mid-level video representation method known as the video response map, in addition to CNN features from sampled frames.
Recent advancements in video description have extended beyond traditional deep learning-based methods, ushering in a new era of transformer-based models. These transformer-based approaches have emerged as a distinct branch within the realm of deep learning, offering unique and promising capabilities for generating coherent and contextually relevant text descriptions for videos.
Transformer-Based Paradigms Pioneering this paradigm shift, Ji et al. introduced the groundbreaking Action Genome framework [13]. Within this framework, actions are represented as intricate compositions of spatio-temporal scene graphs. This innovative approach empowers the generation of rich and structured descriptions that encapsulate the nuances of human activities, providing a fresh perspective on video description.
Subsequently, Kim et al. presented "ViLT" [14], a vision-and-language transformer model that redefines the landscape of video captioning. What sets ViLT apart is its ability to autonomously learn the art of generating video descriptions, all without relying on traditional convolutional layers or region supervision. This marks a significant leap towards harnessing the full potential of transformers in the domain of video captioning.
Another noteworthy contender in the transformer-based video captioning arena is "SwinBERT" [19]. This model incorporates sparse attention mechanisms, a novel approach that enhances efficiency without compromising performance. SwinBERT aims to strike an optimal balance between computational efficiency and captioning accuracy, contributing to the ongoing evolution of video description methods.
Looking ahead, the field of transformer-based video description continues to evolve, with ongoing research exploring new directions and potential solutions. Recent developments include Fu et al.'s fully end-to-end VIdeO-LanguagE Transformer (VIOLET), which adopts a video transformer to explicitly model the temporal dynamics of video inputs [8]. They then present an extension of the VIOLET framework [8] in which the supervision from masked visual modeling (MVM) training is backpropagated to the video pixel space [9]. In another study, Gu et al. [10] propose a two-stream transformer model, TextKG, for video captioning. One stream focuses on external knowledge integration from a pre-built knowledge graph to handle open-set words, while the other stream utilizes multi-modal information from videos. Cross-attention mechanisms enable information sharing between the streams, resulting in improved performance.
Transformer-based models excel at generating coherent single-sentence video descriptions but may face challenges when crafting multi-level descriptions of manipulation actions, especially in complex scenarios. Transformers are inherently optimized for single-sentence descriptions and may not naturally accommodate the intricacies of manipulation actions, which often involve decomposing actions into atomic components and organizing them into complex sequences.
By contrast, our proposed hierarchical approach is tailored to address the nuances of manipulation actions, offering the flexibility to generate multi-level descriptions that span from detailed atomic actions to more concise narratives.
While transformer-based methods are strong in language generation and comprehension, direct comparisons with our approach may not be straightforward due to their single-sentence focus. The choice between these approaches should align with the specific task requirements, whether it entails generating single-sentence descriptions or multi-level, detailed descriptions of manipulation actions.
Note that at the end of the Methods section we discuss some further approaches in greater detail, which we use for comparison with our results.

Figure 3: Flow diagram of both methods. Top (blue arrows): hybrid method; bottom (orange arrows): end-to-end method, where the end-to-end method uses the temporal (start- and end-) information from the segregation into complex actions from the top path. Similar to [33], we unroll the LSTMs to a fixed 80 frames (using zero padding or frame dropping for too-short or too-long video snippets), as this number offers a good trade-off between memory consumption and the ability to feed many video frames to the LSTM. The operations in the gray box are repeated n times, where n is the number of atomic actions needed to describe one given complex action. Furthermore, we use a stack of k LSTMs to arrive at different levels of description granularity. The role of both variables is explained in more detail in the text.

Methods
The goal of this study is to provide a framework for describing manipulation actions in video using human-language sentences at different levels of granularity. Here we compare two new methods. Method 1 first computes novel semantic representations capturing spatio-temporal relations between objects from the video stream and then uses them as a front end to a neural network. Method 2 directly uses an end-to-end-trained network combined with a novel approach for generating action proposals in time. Figure 3 provides an overview of these methods; in the following sections we describe their components in detail.

Method 1: Hybrid Approach
This method embodies a hybrid model that synergistically integrates a statistical semantic pre-analysis of actions with the temporal processing capabilities of Long Short-Term Memory (LSTM) networks. The statistical approach aims at distilling the inherent semantics of various actions, facilitating a structured breakdown of complex actions into simpler, atomic units. Subsequent processing by the LSTM aids in capturing the temporal dependencies and nuances within sequences of actions. A significant advantage of this hybrid approach lies in its efficiency: it demands a relatively limited dataset for effective training, making it feasible for applications where large volumes of annotated data might be scarce or challenging to obtain.

Figure 4: (a) The list of static spatial relations (SSRs): "Above (Ab)", "Below (Be)", "Around (Ar)", "Inside (In)", "Surround (Su)", "Cross (Cr)", "Within (Wi)", "Partial within (Pwi)", "Contain (Co)" and "Partial contain (Pco)", where the "Above", "Below" and "Around" relations in combination with "Touching" are converted to "Top (To)", "Bottom (Bo)" and "Touching Around (ArT)", respectively. The dynamic spatial relations (DSRs) are "Getting close (Gc)", "Moving apart (Ma)", "Moving together (Mt)", "Halting together (Ht)", "Fixed-moving together (Fmt)" and "Stable (S)". (b) A sample of a convex hull. (c-e) Samples of some static spatial relations defined in this paper. If we take the green cube as α and the blue cube as β, then: (c) SSR(α, β) = Cr and SSR(β, α) = Cr; (d) SSR(α, β) = Wi and SSR(β, α) = Co; (e) SSR(α, β) = Pwi and SSR(β, α) = Pco.
Object Characteristics and Pre-processing (Fig. 4)

To craft accurate and descriptive sentences about a given environment, object recognition is pivotal. This happens in two distinct stages:

1. 2D Pre-processing Using RGB Images:
- Objects are detected employing YOLO [27], which has been trained on the labeled objects from the pertinent datasets.
- Given the intricate motion and positions of human hands, OpenPose [3] is employed for their detection. Utilizing key points delineated by OpenPose, a 2D bounding box for each hand is determined.

The result of this stage is a compilation of 2D bounding boxes: one set representing objects identified by YOLO and another for hands discerned via OpenPose.

2. 3D Processing and Spatial Relation Inference:
- Processing with Depth Information: For datasets that contain depth metrics, the 2D data harvested from the preceding stage is amalgamated with point clouds extracted from depth images. This synthesis leads to the generation of 3D bounding boxes for objects, facilitating a more granular comprehension of their spatial inter-relationships.
- Inferring 3D Relations from 2D Imagery: For datasets that solely offer 2D information without accompanying depth data, our method taps into the potential of the 3D-R2N2 approach [4]. The essence of the 3D-R2N2 method is its capacity to employ deep learning models to reconstruct 3D objects from 2D images. More specifically, it utilizes a recurrent neural network (RNN) that integrates information from multiple views of an object (or even a single view) to produce a consistent 3D shape. This network is trained using a combination of synthetic and real images, allowing it to make predictions about the 3D structure of objects seen in diverse 2D images. Hence, even in the absence of explicit depth information, 3D spatial relationships between objects can be inferred.
The result of this is a three-dimensional representation of the scene, which augments the system's spatial awareness and allows for the completion of atomic action quintuples, as a three-dimensional spatial relationship is a fundamental element of an atomic action's definition.
Our method relies on a purely data-driven recognition of actions using computer vision. This is achieved by recognizing the time-chunks and the structure of so-called atomic actions, which are at a later stage concatenated into real action names. Atomic actions are defined by the following quintuple: (Subject, Action-Primitive, Object, Spatial Relation, Place). For example, if a hand touches the top of a cup on a table, its corresponding quintuple will be: (hand, touch, cup, top, table). In the following, we briefly describe each of these five items.
1. The subject has two states and can be "Hand (H)" or "Merged entity (Me)". A merged entity is used when the hand and its touched object act as a single entity and perform an action on another object.

2. The action primitive has four states. Two of those are "Touching (T)" and "Untouching (U)", which are used when one object starts touching or stops touching another object, respectively. The other two states are "Moving together (Mt)" and "Fixed moving together (Fmt)", for the times when two objects move together or one is fixed and the other moves on the fixed one. These primitive states are enough to create the semantic representations required to capture real actions, like cutting, stirring, etc. (see below).
3. The object has four constituents, "O1, O2, O3 and G" (Objects 1-3 and Ground), where the latter supports all objects in the scene except the hand. O1, O2 and O3 are the objects which are the first, second and third, respectively, to display a change in their T/N relations. In [43] it has been discussed that never more than these four constituents (plus the hand) exist in any manipulation action.
4. The spatio-temporal relation between two objects (e.g., subject and object) has thirteen states (see Fig. 4). Below we describe how those are computed.

5. The place has five states. An action can occur on the surface of another object (O1 to O3), on the ground, or in the air (Air).
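The quintuple structure described in items 1-5 can be sketched as a simple data type. The class name and field names below are our own illustrative choices; the field values follow the abbreviations introduced in the text.

```python
from collections import namedtuple

# An atomic action is a quintuple: (Subject, Action-Primitive, Object, Spatial Relation, Place).
AtomicAction = namedtuple("AtomicAction", ["subject", "primitive", "obj", "relation", "place"])

# Example from the text: a hand touches the top of a cup standing on a table.
# "H" = Hand, "T" = Touching, "To" = Top (Above combined with Touching).
aa = AtomicAction(subject="H", primitive="T", obj="cup", relation="To", place="table")

assert aa.primitive == "T" and aa.relation == "To"
```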

Spatio-temporal relations: Item 4 from above
To perform this step, we determine the convex hulls of all objects recognized in the video using the gift wrapping algorithm [31]. Then we compute their spatio-temporal relations by standard set calculus (see Supplement). Consequently, a set of static as well as dynamic relations between the different objects can be determined in a straightforward manner, summarized in Fig. 4.
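The set-calculus idea can be illustrated with a toy classifier: if each object's convex-hull region is discretised into a set of grid cells, a few of the static relations from Fig. 4 follow directly from set operations. This is a minimal sketch covering only four of the thirteen states; in particular, "Ar" here only tests disjointness and omits the proximity check a real "Around" relation would need.

```python
def static_relation(a: frozenset, b: frozenset) -> str:
    """Toy set-calculus classifier for a few static spatial relations (SSRs).

    `a` and `b` are sets of (discretised) grid cells covered by the convex
    hulls of two objects; return values follow the abbreviations in Fig. 4.
    """
    inter = a & b
    if not inter:
        return "Ar"   # disjoint hulls (proximity test omitted in this sketch)
    if a <= b:
        return "Wi"   # a lies within b
    if b <= a:
        return "Co"   # a contains b
    return "Cr"       # partial overlap: "Cross"

# Two nested cells sets and one shifted copy, mimicking Fig. 4(c-d):
big = frozenset((x, y) for x in range(4) for y in range(4))
small = frozenset((x, y) for x in range(1, 3) for y in range(1, 3))
shifted = frozenset((x + 2, y) for x in range(4) for y in range(4))

assert static_relation(small, big) == "Wi" and static_relation(big, small) == "Co"
assert static_relation(big, shifted) == "Cr"   # Cross is symmetric, as in Fig. 4(c)
```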

Mapping complex actions to atomic actions
Above we described that we use action primitives with only four states as one component in the atomic action tuple, where these four states can all be recognized directly in a video. Real actions, to which one needs to associate a characteristic action word for video description, like cutting, stirring, or pick-and-place, consist of a string of atomic actions. In a pre-processing step we used a context-free grammar (see Supplement) to decompose real actions into their constituent atomic actions. As a result, any complex action, for example "cut", is represented as a string of atomic actions AAi, e.g., cut = [AA1, AA2, ..., AAn]. This processing step was done offline and the resulting mappings were stored in a so-called library of action mappings.
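The library of action mappings can be pictured as a plain lookup table from complex action names to their pre-computed atomic-action strings. The decompositions below are hypothetical placeholders (abbreviated to primitive letters only; the real entries are full quintuples):

```python
# Sketch of the "library of action mappings", with made-up decompositions.
ACTION_LIBRARY = {
    "pick_and_place": ["T", "Mt", "U"],
    "cut":            ["T", "Fmt", "Fmt", "U"],
}

def expand(action: str) -> list:
    """Look up the atomic-action string for a complex action name."""
    return ACTION_LIBRARY[action]

# Any complex action becomes a string of atomic actions, e.g. cut = [AA1, ..., AAn]:
assert expand("cut")[0] == "T" and expand("cut")[-1] == "U"
```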

Semantic Representations: SRe
Semantic representations use the same 5-tuple structure as an atomic action: (Subject, Action, Object, Spatial Relation, Place). Different from atomic actions, in the SRe real action names are used. This creates a nested system of the form SRe = (Subject, Action, Object, Spatial Relation, Place), where the Action entry stands for a string of atomic actions, each of which takes the structure AA = (Subject, Action Primitive, Object, Spatial Relation, Place). Evidently, entities from atomic actions can reoccur in the SRe. Importantly, atomic actions can be easily recognized from video, because of the small number and simple structure of the action primitives (only 4). SRes are then derived from them as a string of AAs. As mentioned above, we used a context-free grammar (see Supplement) to pre-compute and decompose all complex actions in the data set into their constituent atomic actions.
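The nesting can be made concrete with a small data sketch: the SRe keeps the 5-tuple shape, but its action slot carries a real action name backed by its string of atomic actions. The "cut" decomposition shown is hypothetical and for illustration only.

```python
# Sketch of a nested SRe for a cutting action ("Me" = merged entity hand + knife).
sre = {
    "subject": "Me",
    "action": {
        "name": "cut",
        # hypothetical string of atomic actions AA1..AAn, each a full quintuple
        "atomic_actions": [
            ("Me", "T",   "apple", "To", "table"),
            ("Me", "Fmt", "apple", "To", "table"),
            ("Me", "U",   "apple", "To", "table"),
        ],
    },
    "object": "apple",
    "relation": "To",
    "place": "table",
}

# Entities from the atomic actions reoccur in the SRe:
assert all(aa[2] == sre["object"] for aa in sre["action"]["atomic_actions"])
```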

The core of the hybrid statistical method
As a main advantage, the hybrid method works with a modest amount of data. In the hybrid method, we first generate the semantic representation (SRe) of the visual content, including subject (tool), action, object, spatio-temporal relation and place; then we model the semantic relationships between the visual components by learning a Conditional Random Field (CRF) structure. Finally, we formulate the generation of natural language descriptions as a decoding stage using two layers of LSTMs.
The LSTM framework allows us to model the video as a variable length input stream and creates output as a variable length sentence.
The start of a manipulation action can be defined as the moment the hand touches something and the end as the moment it is free again [43]. This way we can break a long video into snippets that contain only one action each. Then we associate every component of the SRe 5-tuple with a graph node and create for every video snippet a fully connected graph with 5 nodes. After training, the edges between the nodes contain the probabilities of a co-occurrence of two nodes. For example, in a cutting-SRe it is more likely to find "Subject = hand + knife" (i.e., the subject is here a merged entity) together with "object = apple" than with "object = lamp".
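The snippet segmentation rule can be sketched directly from the touch/untouch events; the function below is an illustrative simplification that assumes events arrive in chronological, alternating order.

```python
def segment_actions(touch_events):
    """Split a long recording into single-action snippets.

    `touch_events` is a chronological list of (frame, kind) pairs, where kind
    is "T" (hand touches something) or "U" (hand is free again); following
    [43], a snippet starts at a "T" and ends at the matching "U".
    """
    snippets, start = [], None
    for frame, kind in touch_events:
        if kind == "T" and start is None:
            start = frame
        elif kind == "U" and start is not None:
            snippets.append((start, frame))
            start = None
    return snippets

# Two actions, spanning frames 10-80 and 95-160:
assert segment_actions([(10, "T"), (80, "U"), (95, "T"), (160, "U")]) == [(10, 80), (95, 160)]
```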
The training of the Conditional Random Field (CRF) is central to this method, marked by the need to choose an appropriate dataset. We draw upon the KIT Bimanual Actions Dataset [6], from which the CRF learns to extract relationships and to determine probabilities for all existing Semantic Representations (SRes). We emphasize that our choice of dataset is a calculated one, focused on the deployment of a quite limited dataset. This constraint serves to imbue our model with precision, preventing it from lapsing into an over-generalized state.
However, the decision to employ a limited dataset, while advantageous in terms of focus, introduces a potential challenge. With the restricted data at hand, there exists the risk of overestimating the probabilities of connections between nodes within our graphs. Such an overestimation can lead to inaccuracies when the CRF tries to predict connections for novel, previously unseen data. To mitigate this risk, we employ the Word Lattice Model [7].
Formally, the Word Lattice Model is presented as a Directed Acyclic Graph, encapsulating an array of potential outcomes, each accompanied by a corresponding confidence level. Its primary utility, traditionally in the domain of machine translation, extends beyond decoding the most probable hypothesis. This is because it considers alternative interpretations that, while bearing slightly lower confidences, remain plausible. Within our methodology, the Word Lattice Model serves as a secondary line of verification. It scrutinizes the probabilities elucidated by the CRF and juxtaposes them with its own map of potential relationships. Should the probabilities derived from the CRF starkly contradict those inferred by the Word Lattice Model, we invoke a re-adjustment process. This entails a reordering of probabilities to ensure their alignment with empirical and logical expectations.
The steps of the re-adjustment process are as follows:

1. Discrepancy Evaluation: Initially, we perform a quantitative analysis to identify any significant discrepancies between the CRF's predicted probabilities and those of the Word Lattice Model. This analysis includes statistical tests to ascertain whether the differences are beyond acceptable margins, thereby warranting adjustments.

2. Identification of Outliers: Probabilities that exhibit substantial deviations from those proposed by the Word Lattice Model are flagged as outliers. These outliers are then thoroughly examined to understand the underlying causes, such as data sparsity or bias in the training set.

3. Probability Recalibration: The recalibration involves adjusting the outlier probabilities. The steps involved in this recalibration are as follows:
- Empirical Evidence Gathering: We revisit the training set to gather instances that correspond to the outlier probabilities. This involves analyzing segments where the co-occurrence of nodes is either over-represented or under-represented.
- Reassessment of Contextual Factors: Each instance is evaluated in its contextual entirety, considering factors such as the frequency of action-object pairings in various contexts and the diversity of the actions performed.
- Expert Verification: Subject matter experts review the instances to confirm or dispute the validity of the co-occurrence probabilities. Their insights are crucial for determining whether to retain, increase, or decrease specific probability values.
- Adjustment Based on Consensus: If a consensus is reached that the current probability values are inconsistent with the empirical evidence and expert opinion, we employ a mathematical model to calculate the new probability. This model incorporates the frequency of the observed co-occurrences and the experts' qualitative assessments to produce a more balanced and representative probability value.
- Integration and Normalization: The newly calculated probabilities are integrated back into the CRF's model, followed by a normalization process to maintain the probabilistic model's consistency.
These steps ensure that each adjustment is not an arbitrary decision but is instead grounded in actual data and expert validation, leading to a CRF model that is both accurate and reliable.

4. Validation and Iteration: Following recalibration, we conduct a validation phase using a held-out validation dataset, which serves as a new reference point for assessing the accuracy of the adjusted probabilities. If the validation results indicate that the adjustments have not yielded an improvement, we iteratively refine the probabilities. This iterative process is guided by a combination of empirical evidence and the feedback provided by the validation phase until the discrepancies are resolved.

5. Integration and Finalization: Once the adjusted probabilities pass the validation phase, they are integrated back into the CRF model. The final model is then subjected to a comprehensive evaluation to ensure that the adjustments have enhanced its predictive performance and generalization capabilities.
Through this rigorous re-adjustment process, we maintain the integrity and applicability of our CRF model, ensuring that it accurately reflects the complex relationships within the data, despite the limitations imposed by a smaller training set.
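The core of the re-adjustment loop, flagging outlier edge probabilities, blending them toward the lattice estimate, and renormalising, can be sketched as follows. The threshold and blending weight are illustrative choices, not values from the paper, and the sketch omits the expert-verification step.

```python
def recalibrate(p_crf, p_lattice, threshold=0.2, weight=0.5):
    """Sketch of the re-adjustment step: CRF edge probabilities are compared
    against the Word Lattice Model, outliers are blended toward the lattice
    estimate, and the result is renormalised to stay a distribution.
    """
    adjusted = {}
    for edge, p in p_crf.items():
        q = p_lattice.get(edge, p)
        if abs(p - q) > threshold:                  # flagged as an outlier
            adjusted[edge] = weight * p + (1 - weight) * q
        else:
            adjusted[edge] = p
    total = sum(adjusted.values())                  # normalisation step
    return {e: v / total for e, v in adjusted.items()}

# An overconfident CRF edge (0.9) is pulled toward the lattice estimate (0.5):
result = recalibrate({"a": 0.9, "b": 0.1}, {"a": 0.5, "b": 0.5})
assert abs(sum(result.values()) - 1.0) < 1e-9
```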
For readers seeking a more profound understanding of the nuanced mechanics of the Word Lattice Model, we refer to [7], which provides a comprehensive exploration of its intricacies.
Following this data preparation process, where we carefully extracted, verified, and fine-tuned our training samples, the curated data is fed into the initial layer of the Long Short-Term Memory (LSTM) decoder. This step serves as the gateway to a complex series of computations, laying the foundation for the generation of detailed and descriptive narratives.
While most existing video description methods produce only one sentence for each video clip, an important capability of our approach is the possibility of generating descriptions in the form of several coherent sentences for each action, with different levels of descriptive detail. For this purpose, we use k (Fig. 3, right) parallel levels of LSTMs, where each LSTM level has two layers. To capture the lowest, most detailed descriptive level, the total number of LSTMs needed must equal the length of the most complex string of atomic actions used to describe any real action in the data sets. The pre-analysis performed to create the library of action mappings showed that the most complex action here consisted of 14 AAs; hence, at most 14 levels of LSTMs are needed. At the first, bottom level, descriptions consist of a set of very detailed sentences, each of which describes just one single atomic action. Such sentences are non-human-like and very awkward, but we can now begin to concatenate atomic actions, guided by the entries in the library of action mappings, leading to sentences that describe concatenated groups of AAs by single action verbs. This is done until full concatenation. For every level that results this way, one LSTM is trained, where levels are often left out because not all AAs can be concatenated into a meaningful verb. As a result, this algorithm can produce coherent sentences describing manipulation videos at different levels.
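How the granularity levels arise from concatenation can be sketched as follows: starting from the fully detailed string of atomic actions, contiguous groups matching a library entry are collapsed into a single action verb, one step at a time, until full concatenation. Greedy left-to-right matching and the library entry used are illustrative simplifications.

```python
def granularity_levels(aa_string, library):
    """Collapse contiguous atomic-action groups that match a library entry
    into single action verbs, recording each intermediate level.  The most
    detailed level comes first, the fully concatenated level last.
    """
    levels = [list(aa_string)]
    current = list(aa_string)
    changed = True
    while changed:
        changed = False
        for verb, pattern in library.items():
            n = len(pattern)
            for i in range(len(current) - n + 1):
                if current[i:i + n] == pattern:
                    current = current[:i] + [verb] + current[i + n:]
                    levels.append(list(current))
                    changed = True
                    break
            if changed:
                break
    return levels

# Hypothetical library entry: pick-and-place = touch, move together, untouch.
lib = {"pick_and_place": ["T", "Mt", "U"]}
lvls = granularity_levels(["T", "Mt", "U", "T", "Mt", "U"], lib)
assert lvls[-1] == ["pick_and_place", "pick_and_place"]
```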

Method 2: End-to-End Network
To address the constraints of the hybrid pipeline, we propose an end-to-end trainable framework designed to learn not only the temporal structure of input video sequences but also the intricate sequence modeling required for generating comprehensive multi-level descriptions. In the following sections, we provide a detailed breakdown of the key components and steps of our end-to-end framework.

Data Collection and Annotation
Our atomic action recognition model relies on a curated collection of videos spanning diverse domains to ensure a rich variety of actions, enhancing the model's versatility. For this, we carefully selected a set of five diverse datasets: YouCook2, MSR-VTT, ActivityNet Captions, EPIC-Kitchens, and the KIT Bimanual Action Dataset. These datasets cover a wide range of activities and domains, providing a comprehensive representation of human manipulation actions. This diversity is essential for enhancing our atomic action recognition model's adaptability and robustness. To harness the full potential of these datasets and create a reliable foundation for our model, we employed a meticulous data annotation process, in which human annotators systematically analyzed each video. They followed specific guidelines and the atomic action definitions to label and describe actions within the videos. This annotation process provides precise information about the sequence of atomic actions in each video, forming the basis for training our atomic action recognition model.

Automatic Atomic Action Recognition
In this section, we present an overview of our automatic end-to-end atomic action recognition and description approach, which consists of three integral stages:

Visual Feature Extraction (CNN):
To initiate the process, we employ Convolutional Neural Networks (CNNs) for visual feature extraction. This stage serves as the foundation of our atomic action recognition and description approach. The choice of CNNs is rooted in their proven effectiveness in handling spatial information, a fundamental requirement for accurate atomic action recognition. Atomic actions, as elemental components of video narratives, often exhibit fine-grained spatial intricacies, such as subtle hand movements, object interactions, or body postures. CNNs are renowned for their ability to capture such intricate spatial patterns, making them a natural and robust fit for our problem.
The input to this stage is a set of video frames F = {f_1, f_2, ..., f_{n_f}}, where n_f is the number of frames in the video sequence. These frames contain rich visual information, encompassing the dynamic interplay of objects, scenes, and actors in the video. Our objective is to extract meaningful visual features from these frames.
To achieve this, we leverage the Visual Geometry Group (VGG) architecture, specifically the VGG16 and VGG19 variants. VGG architectures are well regarded for their architectural simplicity and their proficiency in capturing spatial details. They have demonstrated exceptional performance in image classification tasks, making them a compelling choice for feature extraction in our context.
The output of this stage is a sequence of feature vectors X = {x_1, x_2, ..., x_{n_f}}, where x_i is the feature vector extracted from frame f_i. These feature vectors encode the spatial characteristics and patterns observed in the video frames, encapsulating information about the distribution of edges, textures, shapes, and object arrangements within each frame.
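As a rough illustration of this stage, the following sketch replaces the VGG backbone with a single random convolution filter bank plus pooling; it reproduces only the shape of the mapping F → X, not the learned VGG features:

```python
import numpy as np

# Minimal stand-in for CNN feature extraction (the paper uses VGG16/VGG19):
# one convolution + ReLU + global average pooling per frame, yielding one
# feature vector x_i per frame f_i. Filter weights are random placeholders.
rng = np.random.default_rng(0)

def extract_features(frames, n_filters=8, k=3):
    """frames: (n_f, H, W) grayscale video; returns X of shape (n_f, n_filters)."""
    filters = rng.standard_normal((n_filters, k, k))
    n_f, H, W = frames.shape
    X = np.empty((n_f, n_filters))
    for i, f in enumerate(frames):
        for j, w in enumerate(filters):
            # valid convolution via an explicit sliding window
            resp = np.array([[np.sum(f[r:r + k, c:c + k] * w)
                              for c in range(W - k + 1)]
                             for r in range(H - k + 1)])
            X[i, j] = np.maximum(resp, 0).mean()   # ReLU + global average pool
    return X

frames = rng.standard_normal((4, 16, 16))   # toy clip: n_f = 4 frames
X = extract_features(frames)
print(X.shape)   # (4, 8)
```

In the actual pipeline each x_i would be a VGG feature vector (e.g. from the last fully connected layer) rather than these toy pooled responses.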

Temporal Modeling (GRUs):
In this stage, we model the nuanced temporal dynamics within atomic actions using Gated Recurrent Units (GRUs). This is a pivotal step, as atomic actions often involve subtle and temporally precise movements that necessitate sophisticated modeling.
The input to this stage is the sequence of feature vectors X = {x_1, x_2, ..., x_{n_f}} extracted by the CNNs during the visual feature extraction stage. Each feature vector x_i encodes the spatial characteristics of the corresponding frame f_i.
Our objective is to model the temporal evolution of these feature vectors, capturing how atomic actions unfold over time. To achieve this, we employ GRUs, recurrent neural networks well suited for handling sequential data.
The key operation of a GRU can be described as follows: for each frame f_i, the GRU computes the hidden state h_i from the input feature vector x_i and the previous hidden state h_{i-1}:

h_i = GRU(x_i, h_{i-1}).

Here, h_i is the hidden state at time step i and h_{i-1} is the hidden state from the previous time step. This recurrent operation enables the network to capture the temporal dependencies within the sequence of feature vectors.
The output of this stage is a sequence Y = {y_1, y_2, ..., y_{n_f}}, where y_i is the output at time step i. These outputs encapsulate the temporal dynamics of the input feature vectors; each y_i can be thought of as a rich representation encoding how the features within atomic actions evolve over time.
In essence, the temporal modeling stage transforms spatial information into a temporal representation, allowing us to capture the temporal dynamics inherent in a sequence of atomic actions. The output sequence Y serves as the foundation for the subsequent stages of our end-to-end approach, enabling us to generate comprehensive video descriptions.
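A minimal numpy implementation of the standard GRU update h_i = GRU(x_i, h_{i-1}) (with random placeholder weights, not the trained parameters of our model) looks as follows:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class GRUCell:
    """Standard GRU cell: update gate z, reset gate r, candidate state h~."""
    def __init__(self, d_in, d_h, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1   # small random init (placeholder, not trained weights)
        self.Wz, self.Uz = s * rng.standard_normal((d_h, d_in)), s * rng.standard_normal((d_h, d_h))
        self.Wr, self.Ur = s * rng.standard_normal((d_h, d_in)), s * rng.standard_normal((d_h, d_h))
        self.Wh, self.Uh = s * rng.standard_normal((d_h, d_in)), s * rng.standard_normal((d_h, d_h))

    def step(self, x, h_prev):
        z = sigmoid(self.Wz @ x + self.Uz @ h_prev)            # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h_prev)            # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h_prev))
        return (1 - z) * h_prev + z * h_tilde                  # new hidden state

# Run the cell over a toy feature sequence X = {x_1, ..., x_{n_f}}.
cell = GRUCell(d_in=8, d_h=16)
X = np.random.default_rng(1).standard_normal((4, 8))
h = np.zeros(16)
Y = []
for x in X:
    h = cell.step(x, h)
    Y.append(h)
print(len(Y), Y[0].shape)   # 4 (16,)
```

The collected hidden states Y play the role of the temporal representation passed on to the description generation stage.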

Description Generation (Stacked LSTMs):
With the temporal modeling stage completed, we now focus on generating descriptive annotations for the video frames. This phase bridges the gap between visual data and textual descriptions.
The input to this stage is the sequence of output vectors Y = {y_1, y_2, ..., y_{n_f}} from the GRUs. However, instead of generating a single sentence, we employ a hierarchical approach to create multi-level descriptions.
Our hierarchical description generation system consists of k levels of Long Short-Term Memory (LSTM) layers, each with two layers as mentioned above. This hierarchy enables us to produce descriptions at various levels of detail.
Each level of the hierarchy corresponds to a specific level of detail in the descriptions. We train one LSTM for each level, often leaving out levels that do not result in a meaningful concatenation due to the nature of the atomic actions.
Formally, the distribution over the output sequence W given the input sequence F can still be defined as

P(W|F) = ∏_t p(w_t | w_{t-1}, h_t),

where p(w_t | w_{t-1}, h_t) is the distribution over the words in the vocabulary for predicting the next word w_t given the previous word w_{t-1} and the corresponding hidden state h_t of the LSTM at the current hierarchical level.
During training in the decoding stage, our objective is to estimate the model parameters to maximize the likelihood of the predicted output sentence given the hidden representation of the visual frame sequence and the previous words it has processed.
This hierarchical approach allows us to provide multi-level descriptions, ranging from detailed atomic actions to more concise and contextually organized narratives, enhancing the expressiveness and adaptability of our description generation system.
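The factorized objective log P(W|F) = Σ_t log p(w_t | w_{t-1}, h_t) and greedy decoding can be illustrated on a toy vocabulary; the logits below are random stand-ins for actual LSTM outputs:

```python
import numpy as np

# Toy vocabulary; in the real model this is the full training vocabulary.
VOCAB = ["<s>", "the", "hand", "picks", "up", "screwdriver", "</s>"]

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sentence_log_likelihood(words, step_logits):
    """log P(W|F) = sum_t log p(w_t | ...), with per-step logits over VOCAB."""
    ll = 0.0
    for t, w in enumerate(words):
        p = softmax(step_logits[t])
        ll += np.log(p[VOCAB.index(w)])
    return ll

rng = np.random.default_rng(0)
logits = rng.standard_normal((4, len(VOCAB)))   # stand-in for LSTM outputs

# Greedy decoding: pick the argmax word at each step.
greedy = [VOCAB[int(np.argmax(l))] for l in logits]
ll = sentence_log_likelihood(greedy, logits)
print(greedy, round(ll, 3))
```

Training maximizes exactly this log-likelihood over the reference sentences at each hierarchy level, with the logits produced by that level's LSTM instead of random values.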
In summary, our end-to-end approach integrates VGG-based CNNs for robust visual feature extraction, GRUs for modeling intricate temporal dependencies, and stacked LSTMs for generating descriptive annotations. This multi-stage process enhances the precision and efficiency of our automatic annotation generation system, enabling it to identify subtle, temporally precise, and contextually significant atomic actions within video sequences.
Figure 3 depicts our end-to-end model at the bottom and shows its relation to the hybrid approach.
Evaluation Metrics and Methods for Comparison

Metrics
Evaluation of video descriptions is challenging, as there is no specific ground truth or "right answer" that can be taken as a reference for benchmarking accuracy. A video can be correctly described by a wide variety of sentences that may differ not only syntactically but also in terms of semantic content [1]. In this work, we report results from two points of view: (1) automatic measures that correlate highly with human judgement, namely the Bilingual Evaluation Understudy (BLEU) [25], where the BLEU@N (N = 1 to 4) metric counts the matched N-grams between machine-generated and reference sentences, the Metric for Evaluation of Translation with Explicit Ordering (METEOR) [2], and the Consensus-based Image Description Evaluation (CIDEr) [32]; and (2) human evaluation, divided into three measurements: (I) grammatical correctness, (II) semantic correctness, and (III) the relevance of the produced video description compared to what is present in the video.
These metrics are widely acknowledged for assessing the quality of generated textual descriptions in terms of their linguistic accuracy, relevance, and overall fluency.
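For illustration, a minimal single-reference BLEU@N can be computed as clipped n-gram precision with a brevity penalty; real evaluations typically use library implementations (e.g. NLTK) with smoothing and multiple references:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Single-reference BLEU@max_n: clipped n-gram precision + brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        clipped = sum(min(cnt, r[g]) for g, cnt in c.items())  # clip by reference counts
        total = max(sum(c.values()), 1)
        if clipped == 0:          # unsmoothed: any zero precision gives BLEU = 0
            return 0.0
        log_prec += math.log(clipped / total) / max_n
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_prec)

ref = "the left hand picks up a screwdriver from the table"
print(round(bleu(ref, ref), 3))                    # identical sentences -> 1.0
print(round(bleu("the left hand moves", ref), 3))  # -> 0.0 (no matching 4-gram)
```

METEOR and CIDEr follow the same sentence-comparison paradigm but add stem/synonym matching and consensus TF-IDF weighting, respectively.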

State of the Art Methods used for Comparison and Data Sets
Here, we introduce several state-of-the-art video annotation methods, with a specific focus on those designed for annotating manipulation actions. These methods serve as key points of reference for a comparative analysis with our proposed approaches. We categorize these comparisons by different aspects to provide a comprehensive evaluation of our method. Note that in the Results section we have analysed the different methods in comparison to our approaches using the data sets mentioned below. In addition, we have analysed all methods using the KIT Bimanual Action Dataset.

Multi-level Descriptions
[13] presents an innovative framework, "Action Genome," for video understanding and description. This method focuses on actions within videos and delves into the intricate details of actions and their representations, representing actions as compositions of spatio-temporal scene graphs that allow for rich and structured descriptions of human activities. Our approach shares with "Action Genome" an emphasis on capturing nuanced actions within videos, albeit with a different methodology: while "Action Genome" emphasizes scene-graph-based representations, our approach employs a hierarchical system for generating multi-level descriptions, which allows us to capture fine-grained details within manipulation actions.
[17] presents a hierarchical approach for generating descriptive image paragraphs. This method allows for multi-level descriptions, which aligns with our goal of generating multi-level descriptions for manipulation actions. Similar to our approach, it focuses on capturing fine-grained details within images and organizing them into coherent narratives.
Dataset: YouCookII [41] contains cooking videos and is suitable for evaluating multi-level descriptions of manipulation actions. It allows us to assess how well our method can generate descriptions that capture actions at different levels of granularity within cooking tasks.

LSTM-Based Encoding and Decoding
[20] proposes a framework that includes two LSTM layers for encoding and decoding visual features and for enhancing the generated descriptions. This architecture resembles our approach, which also utilizes LSTM layers for both the encoding and decoding stages. Additionally, both methods employ a module for word selection, similar to our word lattice module. This comparison highlights the commonality in utilizing LSTM-based encoding and decoding mechanisms.
Dataset: MSR-VTT [37] is a diverse dataset that includes video descriptions. It can be used to evaluate LSTM-based encoding and decoding methods for video annotation.

Semantic Summarization
[30] introduces an approach for summarizing visual information using a boundary-based key frame selection method. This aligns with our semantic approach, in which we summarize visual information using semantic representations (SRes) before translating them into human language sentences. Both methods focus on the effective summarization of visual content, albeit through different techniques.
Dataset: ActivityNet Captions [18] provides video segments with descriptions and can be used for evaluating semantic summarization. This dataset allows us to assess how well our method summarizes visual information into semantic representations and translates them into human language sentences.

Robotic Manipulation Actions
[21] is one of the few video description papers that specifically focuses on manipulation actions. It translates videos into commands applicable to robotic manipulation using deep recurrent neural networks. Like our work, it concentrates on the description of manipulation actions, making it a relevant comparison and emphasizing the shared focus on annotating videos containing manipulation actions.
Dataset: EPIC-Kitchens [5] includes kitchen-focused videos and is suitable for evaluating methods related to robotic manipulation actions. It captures real-world cooking and manipulation actions in a kitchen environment, making it a good choice for assessing the annotation of such actions.

All Methods checked with one Additional Dataset
In tandem with the previously mentioned datasets, we have incorporated the KIT Bimanual Action Dataset [6] as a crucial 3D dataset in our study. This inclusion enhances the comprehensiveness of our research, with a specific focus on manipulation actions. All methods mentioned above have been analysed using this dataset.

Results and Discussion
This section presents a comparative analysis of our proposed methods, the hybrid statistical method and the end-to-end method, against state-of-the-art approaches on related datasets.

Multi-level Descriptions
The evaluation of multi-level description generation capabilities was conducted on both a curated subset [41] and the full set of manipulation actions of YouCookII.
The subset comprised approximately 20% of the total dataset, specifically chosen to include a diverse range of cooking activities and ingredient complexities, providing a representative sample of common kitchen tasks. This subset selection aims to simulate real-world scenarios where data may be limited but varied.
We juxtaposed the performance of our Hybrid Statistical Method (Method 1) and our End-to-End Method (Method 2) against the methods detailed by Ji et al. [13] and Krause et al. [17], employing BLEU, METEOR, and CIDEr as our evaluation metrics.
Table 1: Comparison of multi-level description generation performance using BLEU, METEOR, and CIDEr metrics on both a representative subset and the full set of manipulation actions in the YouCookII dataset [41].

Method | Subset: BLEU-4, METEOR, CIDEr | Full YouCookII: BLEU-4, METEOR, CIDEr

The results presented in Table 1 confirm the strengths of our Hybrid Statistical Method (Method 1) in scenarios with constrained data, underscoring its efficacy where data may be limited. This mirrors the few-shot learning achievements demonstrated by Ji et al. [13], where actions are decomposed into spatio-temporal scene graphs to encode activities hierarchically. However, our method refines this approach by capturing action-object interactions at greater granularity, significantly improving recognition accuracy in limited-data contexts.
By contrast, our End-to-End Method (Method 2) demonstrates superior scalability on the full set of manipulation actions in the YouCookII dataset, achieving higher scores across all metrics. While Ji et al. [13] lay the groundwork for hierarchical representation, our end-to-end approach capitalizes on this by incorporating a comprehensive understanding of atomic actions and temporal dynamics, which is crucial for the nuanced generation of video descriptions. Furthermore, Krause et al. [17] explore narrative generation through dense captioning of image regions, which our approach advances by translating it into the video domain, thus managing to weave a coherent story over the sequence of frames. By doing so, our method maintains the intricate details and the broader context of the visual narrative, a challenge not fully addressed by the static-image focus of Krause et al. The seamless integration of visual perception with linguistic output in our End-to-End Method (Method 2) distinguishes it from the multi-stage processes typical of previous works. This streamlined pipeline preserves semantic integrity and ensures continuity in the narrative flow, resulting in video captions that are not only contextually accurate but also rich in detail.

LSTM-Based Encoding and Decoding
To evaluate the LSTM-based encoding and decoding strategies for video captioning, we focused on a manipulation-specific subset of the MSR-VTT dataset [37]. This subset was carefully curated to include a variety of interaction-intensive videos, offering a rigorous testing ground for our methods. Further, we extracted a 20% segment of this manipulation-focused subset to assess the performance of our Hybrid Statistical Method (Method 1) under the constraints of limited data availability. This smaller sample reflects real-world scenarios where comprehensive datasets may not be readily accessible. In these conditions, our Hybrid Statistical Method demonstrated a notable advantage over our End-to-End Method (Method 2), underscoring its suitability for contexts with scarce data resources (see Table 2).
Our comparative analysis also included the Boosted and Parallel LSTM (BP-LSTM) architecture proposed by Nabati et al. [20]. While their architecture innovatively employs a boosted and parallel approach to enhance the LSTM's video captioning capabilities, our End-to-End Method leverages a more integrated processing pipeline. This cohesive approach facilitates a direct flow of information from the video content to the generated captions, optimizing the use of semantic cues and temporal dynamics. This mechanism allows for a nuanced discernment of relevant features across video frames, leading to captions that are not only syntactically and semantically coherent but also contextually richer and more descriptive. The results on the full manipulation subset of the MSR-VTT dataset indicate that our End-to-End Method outperforms the BP-LSTM, especially in generating accurate and detailed descriptions for complex manipulation actions.
The comparative evaluation presented in Table 2 illuminates the strengths of our approach in relation to the BP-LSTM architecture proposed by Nabati et al. [20]. Their architecture introduces an ensemble of LSTM layers augmented with a boosting algorithm, representing a notable development in video captioning. Their method employs multiple LSTMs in a parallel configuration to encode and decode video data, aiming to enhance the precision of the generated textual descriptions.
In our work, we adopt a different perspective by simplifying the captioning process with an End-to-End Method. This approach may appear modest in comparison to the parallel and boosted networks utilized by Nabati et al., but it aligns closely with the inherent sequential processing strengths of LSTM networks. By refining the encoding scheme to better capture the temporal flow of video content, we reduce redundancy and focus the LSTM on the critical task of generating contextually rich captions. Our architecture's strength lies in its ability to distill and utilize salient video features effectively, thereby enhancing the LSTM's capacity for predictive accuracy in caption generation. It is particularly adept at handling the intricacies of manipulation actions, which are essential for producing coherent and detailed narratives that resonate with the unfolding events in the videos.
When applied to the expansive dataset of MSR-VTT, our method consistently performs well, demonstrating its capability to scale and maintain performance without the need for complex, multi-stage frameworks.

Semantic Summarization
Semantic summarization of video content is pivotal for generating concise yet comprehensive descriptions. Our evaluation extends to the ActivityNet Captions dataset [18], focusing on a subset of manipulation actions and on approximately 20% of this subset to reflect limited-data scenarios.
Our comparison encompassed our Hybrid Statistical Method (Method 1) and our End-to-End Method (Method 2) against the boundary-based keyframe selection approach by Singh et al. [30]. Their approach efficiently reduces computational costs by encoding visual information through selected keyframes, but this can lead to overlooking the temporal relationships essential for fully understanding the sequence of events within a video.
By contrast, our End-to-End Method maintains the continuity of the video narrative, capturing the essence of actions and contexts without the information loss that may accompany keyframe reduction. Our Hybrid Statistical Method, tailored for sparse data environments, generalizes from limited examples to yield accurate summaries.
The results, presented in Table 3, demonstrate that our methods surpass keyframe-based summarization, especially in scenarios where maintaining narrative flow and temporal dynamics is critical for generating precise video descriptions. This is due to the comprehensive analysis of video content that our methods undertake, ensuring no vital information is missed during the summarization process.

Robotic Manipulation Actions
Robotic manipulation commands derived from video data require a nuanced understanding of sequential actions and context. Our analysis leveraged the EPIC-Kitchens dataset [5], scrutinizing a comprehensive set of manipulation actions and a representative 20% subset. Nguyen et al. [21] introduced a method employing RNNs to translate video sequences into robotic commands. While their method capitalizes on advanced feature extraction and a two-layer RNN for improved accuracy, it may not fully capture the subtleties of manipulation actions within a kitchen environment.
Our Hybrid Statistical Method (Method 1) excels in scenarios with sparse data, effectively capturing relevant features and translating them into accurate commands. Conversely, our End-to-End Method (Method 2) demonstrates its superiority in a data-abundant environment by seamlessly integrating video frame analysis and command generation, thus preserving the contextual flow of actions.
The results, as summarized in Table 4, suggest that our End-to-End Method outperforms Nguyen et al.'s approach when ample data is available, due to its comprehensive processing of sequential video frames, which is crucial for generating contextually rich commands. The Hybrid Statistical Method, on the other hand, showcases its robustness and adaptability in limited-data scenarios. In summary, our End-to-End Method's integrated approach to processing and translating video data into manipulation commands accounts for its high performance on the full dataset. It maintains the integrity and continuity of the video content, translating it into commands that are accurate not only in terms of individual actions but also in their temporal and contextual relevance. This capability is particularly beneficial for robotic manipulation tasks that require an understanding of the sequence and context of actions. Our Hybrid Statistical Method's performance on the limited-data subset further reinforces its potential for rapid deployment in situations where data collection is challenging yet a high degree of accuracy is required.

Qualitative Analysis on KIT Actions Dataset
A comprehensive qualitative analysis was performed to assess the descriptive capabilities of our methods compared to the state-of-the-art methods mentioned earlier. The KIT Actions Dataset was chosen for this analysis because its size made a detailed qualitative evaluation by human raters feasible. Ten evaluators, bachelor and master students aged between 18 and 29 and well acquainted with the task, participated in the assessment. Each evaluator was briefed about the objectives of the study to ensure an informed and unbiased rating process.
Participants observed 540 video recordings from the KIT Actions Dataset, along with the corresponding descriptive sentences generated by each method. They were asked to score the outputs based on three criteria: "grammatical correctness," "semantic correctness," and "relevance," with up to 5 points awarded for each category. These criteria were selected to cover the essential aspects of natural language descriptions that contribute to the overall comprehensibility and utility of the generated text.
Table 5 summarizes the average scores from the raters for each method across the three evaluation dimensions. Each entry in the table provides a score out of 5 in three categories: Grammatical Correctness, Semantic Correctness, and Relevance.

Method | Grammatical Correctness | Semantic Correctness | Relevance
Ji et al. [13] | 3.2 | 3.1 | 3.0
Krause et al. [17] | 3.0 | 3.2 | 3.1
Nabati et al. [20] | 3.5 | 3.4 | 3.3
Singh et al. [30] | 3.6 | 3.7 | 3.5
Nguyen et al. [21] | 3.8 | 3.9 | 3.7
Our Method 1 (Hybrid) | 4.1 | 4.2 | 3.9
Our Method 2 (End-to-End) | 4.3 | 4.5 | 4.1

The scores were assigned based on human judgment and expert analysis of the output captions. A higher score in each category reflects a closer approximation to human-like performance. For instance, a score of 4.5 in Grammatical Correctness would imply very few grammatical errors, whereas a score of 4.7 in Semantic Correctness would suggest that the generated descriptions closely match the semantic content of the actions being performed.
The results provide insights into the practical effectiveness of each method in generating descriptions that are not only grammatically sound but also semantically accurate and contextually relevant.
Our Methods 1 (Hybrid) and 2 (End-to-End) have been compared against existing approaches to highlight their relative performance. In particular, we observe that our Method 2 (End-to-End) typically achieves higher scores in the Relevance category, indicating its superior ability to focus on the most crucial elements of the video content. On the other hand, our Method 1 (Hybrid) shows strong performance in Semantic Correctness, suggesting that it can generate meaningful captions even with limited data.
These scores further substantiate the adaptability and effectiveness of our proposed methods, providing qualitative insights that align with the quantitative metrics reported earlier. They reflect the balance between producing grammatically sound sentences, capturing the essence of the visual content semantically, and maintaining relevance to the core actions depicted in the videos.

Conclusion
In this paper, we proposed two methods for producing multiple-sentence descriptions of complex manipulation actions, a class of actions very common for humans and also prevalent in human-robot interaction. Our central focus has been on the generation of a hierarchically ordered set of different annotations, from detailed and complex to condensed and simple, which was achieved by using stacked LSTMs. The problem of generating multiple sentences has been studied before: Senina et al. [28] modeled intra-sentence consistency by considering the probability of occurrence between pairs of objects and actions in consecutive sentences; Zhang et al. [40] proposed a method for multi-sentence generation by modeling the relational information between objects and events in the video using a graph-based neural network; more recently, Rohrbach et al. [26] presented an adversarial inference for multi-sentence video description. However, the primary emphasis of existing methods has been on describing the sequence of complex actions executed by one or more subjects; they do not pay attention to the constituent sub-actions that make up each action. Different from this, our work is the first that constructs a hierarchy of action descriptions building on the concept of atomic actions, which are directly recognizable in a video by conventional computer vision methods. This way we can produce descriptive sentences at different levels of granularity. Even transformers may not be the ideal choice to this end, as they depend on substantial labeled data and extensive pre-processing procedures, and they may encounter difficulties in capturing the spatial and temporal relationships in videos. Our joint embedding space approach, on the other hand, employs pre-trained visual and linguistic features to match inputs directly, facilitating efficient generation of multi-sentence descriptions for complex manipulation action videos without extensive pre-processing or large amounts of labeled data.
Quantitative analysis has demonstrated that our methods are comparable to the state of the art. Additionally, human raters have confirmed that the resulting descriptions are understandable and generally appropriate. In conclusion, we would argue that hierarchical action description should offer additional functionality allowing users to adapt the descriptive depth to their individual needs.

Comparison of Hybrid Statistical with End-to-End Method: Strengths and Limitations
When selecting an approach for video description generation, it is crucial to consider the strengths and limitations of both the Hybrid Statistical and the End-to-End method. This comparison aims to provide insights for informed decision-making. Here we summarize the aspects of the different methods and discuss their pros and cons; in the Appendix we provide an itemized version of this to allow for an easier side-by-side comparison.
The Hybrid Statistical Method utilizes a combination of handcrafted features, rule-based modeling, and linguistic knowledge to create interpretable descriptions of actions. Its strength lies in providing multi-level descriptions that are not only clear but also based on well-established rules, making it particularly effective for complex actions that follow known patterns. This method, however, is not without limitations: it relies heavily on manual engineering and linguistic input, which makes it less adaptable to new, diverse actions not covered by the predefined rules. While there are strategies to handle unseen data, such adaptations often require manual intervention, which may not be ideal in rapidly changing environments.
On the other hand, the End-to-End Method leverages the power of Convolutional Neural Networks (CNNs) for feature extraction, Gated Recurrent Units (GRUs) for temporal modeling, and Long Short-Term Memory networks (LSTMs) for generating descriptions. This method is thus fundamentally data-driven, learning directly from very large datasets, which allows it to generalize across a wide range of tasks and actions. It is scalable and capable of handling unseen data more easily than the hybrid approach, especially when employing strategies like data augmentation and transfer learning to enhance its adaptability. The end-to-end method is suitable for applications where the availability of large amounts of training data and the need for diverse action handling are paramount, offering automated, data-driven solutions that minimize the need for manual tweaking.
When deciding between the two, the Hybrid Statistical Method is the go-to choice for scenarios that require highly interpretable, multi-level descriptions for well-defined actions. It is ideal when dealing with complex actions with known patterns and where interpretability is crucial. Conversely, the End-to-End Method is preferred in situations with abundant training data, a need for handling a diversity of actions, and a preference for automated, data-driven solutions. It excels in adaptability, making it capable of handling new actions more effectively. In some cases, combining both methods may offer the best of both worlds, with the hybrid approach tackling complex actions and the end-to-end method providing automation for more common tasks.

The smallest bounding (or enclosing) box for a point set S in N dimensions is the box with the smallest measure (area, volume, or hyper-volume in higher dimensions) within which all the points lie. To provide the required object representation accuracy in this paper, we use the "3D convex hull" model, which is the smallest convex set that includes the object. We constructed this model using the "gift wrapping algorithm" [31].

Spatial reasoning using convex hulls

Basic definitions
Here, we propose a model based on space division, including interior, boundary, and exterior points, for defining spatial relations. To this end, the spatial topological relations of any two 3D convex hulls can be described in detail by the intersections of their point sets. Suppose α is an object whose point cloud is obtained from a depth camera. We give formal definitions of the boundary, interior, and exterior of α as follows:

Definition 1. The boundary of a point cloud α, denoted by δα, is the convex hull of α.
Definition 2. The interior of a point cloud α, denoted by α^0, is the interior of δα (δα ∩ α^0 = ∅).
Definition 3. The exterior of a point cloud α, denoted by α^-, is the exterior of δα; α^- does not contain any point of α (α ∩ α^- = ∅).
Definition 4. The closure of a point cloud α, denoted by α^+, combines the boundary and the interior of the point cloud (i.e., α^+ = α^0 ∪ δα).
Accordingly, 3D space is completely divided into the three parts α^0, δα, and α^- (α^0 ∪ δα ∪ α^- = R^3). Therefore, the spatial topological relations between two point clouds α and β can be represented as the relations between their corresponding interior, boundary, and exterior points, and the whole surrounding space is partitioned into six parts (Fig. 5):
1. the part of α located in the interior of δβ, denoted by α ∩ β^0;
2. the part of α located on δβ, denoted by α ∩ δβ;
3. the part of α located in the exterior of δβ, denoted by α ∩ β^-;
4. the part of β located in the interior of δα, denoted by α^0 ∩ β;
5. the part of β located on δα, denoted by δα ∩ β;
6. the part of β located in the exterior of δα, denoted by α^- ∩ β.

Figure 5: At the intersection of two convex hulls, the space is divided into six separate sections (we show two 2D convex hulls for simplicity, but this space partitioning is also valid for 3D convex hulls).

Thus, the spatial topological relation from α to β can be represented as a matrix Rel(α, β) built from these six parts:

Rel(α, β) = [ α ∩ β^0   α ∩ δβ   α ∩ β^- ;
              α^0 ∩ β   δα ∩ β   α^- ∩ β ]

Using this matrix, we can define all spatial topological relations from α to β.
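The emptiness pattern of the six partitions, and hence the matrix Rel(α, β), can be sketched with finite point sets and simple membership predicates standing in for the convex-hull geometry; here a toy 1D "hull" [min, max] replaces the 3D case:

```python
# Sketch: Rel(α, β) as Boolean non-emptiness tests over the six space
# partitions. interior/boundary/exterior are hypothetical stand-ins for the
# true convex-hull membership tests (α^0, δα, α^-).

def rel_matrix(alpha, beta, interior, boundary, exterior):
    """alpha, beta: point sets; each predicate(p, s) tests membership of
    point p w.r.t. the hull of point set s."""
    def nonempty(points, region, s):
        return any(region(p, s) for p in points)
    return [
        [nonempty(alpha, interior, beta),   # α ∩ β^0
         nonempty(alpha, boundary, beta),   # α ∩ δβ
         nonempty(alpha, exterior, beta)],  # α ∩ β^-
        [nonempty(beta, interior, alpha),   # α^0 ∩ β
         nonempty(beta, boundary, alpha),   # δα ∩ β
         nonempty(beta, exterior, alpha)],  # α^- ∩ β
    ]

# Toy 1D "hull": the interval [min(s), max(s)] with endpoints as boundary.
interior = lambda p, s: min(s) < p < max(s)
boundary = lambda p, s: p == min(s) or p == max(s)
exterior = lambda p, s: p < min(s) or p > max(s)

alpha, beta = [0, 2, 4], [4, 6, 8]      # hulls [0,4] and [4,8] touch at 4
print(rel_matrix(alpha, beta, interior, boundary, exterior))
# -> [[False, True, True], [False, True, True]]
```

Replacing the three predicates with the actual convex-hull membership tests (described under Computations below) gives the full 3D relation matrix.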

Computations
Spatial relations are abstract relationships between entities in space. A correct understanding of object-wise spatial relations for a given action is essential, for example, for a robot to perform an action successfully. Earlier, spatial relations between objects have been divided into "static" and "dynamic" spatial relations (SSR, DSR) [42]. "Static" refers to relations between the locations of objects in space, while "dynamic" describes the kinetic relationships between objects. A static relation, for example, would be "above"; a dynamic one, e.g., "moving together". Originally there were 8 static and 6 dynamic relations, which we have now extended (for the complete list as well as a sample of static spatial relations, please see Fig. 4 in the main paper).
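As a toy illustration of the SSR/DSR distinction (not the paper's exact rules), a static relation like "above" can be read off a single frame from object centroids, while a dynamic one like "moving together" compares displacements across frames; the thresholds below are assumed values:

```python
MOVE_EPS = 0.01      # assumed: minimum displacement to count as "moving"
TOGETHER_EPS = 0.05  # assumed: maximum displacement difference for "together"

def above(c1, c2):
    """Static relation: centroid c1 lies above centroid c2 (z axis)."""
    return c1[2] > c2[2]

def moving_together(track1, track2):
    """Dynamic relation: both objects move, with nearly identical
    displacement between the first and last frame of their tracks."""
    d1 = [b - a for a, b in zip(track1[0], track1[-1])]
    d2 = [b - a for a, b in zip(track2[0], track2[-1])]
    mag = lambda v: sum(x * x for x in v) ** 0.5
    diff = mag([x - y for x, y in zip(d1, d2)])
    return mag(d1) > MOVE_EPS and mag(d2) > MOVE_EPS and diff < TOGETHER_EPS
```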
To accurately determine these spatial relations using convex hulls, we need to compute the matrix Rel for each static relation, as explained in equation 2. Before computing Rel, however, we need to assess whether two convex hulls are touching, because physical contact influences the actual spatial relation. Therefore, we define Touch(α, β) = 1 if the convex hulls touch each other and Touch(α, β) = 0 if not. As our convex hulls are in 3D, their touching area is either empty (when they are disjoint) or a plane (when they are touching). A sample of two 3D convex hulls and their touching plane is shown in Fig. 6. As illustrated, for computing Touch we need to know whether two sets of points (interior, boundary, or exterior) intersect. This is done in two steps to speed up computation. First, we create the axis-aligned bounding box (AABB) of each object. If a point is located in the convex hull of an object, it must also be located in the AABB of that object, and this is easy and fast to evaluate for all points. If a point is not in the AABB, it does not have to be considered any further. For all remaining points, we need to determine whether they are in the convex hull. As mentioned earlier, we obtain the convex hull of each object using the gift wrapping algorithm [31]. This algorithm creates a three-dimensional polyhedron (α), each side of which is a plane. For each of these planes, we check whether the target point lies to the "left" of that plane. When performing this check, we treat the planes as vectors pointing counter-clockwise around the convex hull. If the target point is to the "left" of all of these plane vectors, then it is contained in the polyhedron (α⁰); if the target point is on one of the side planes, it is considered a polyhedron boundary point (δα); otherwise, it lies outside the polyhedron (α⁻). Suppose the equation of plane N is aX + bY + cZ + d = 0 and [x, y, z] are the coordinates of point P; then the sign of ax + by + cz + d determines the side of N on which P lies: a positive value means P is to the "left" of the plane, zero means P lies on the plane, and a negative value means P is to the "right" of the plane.

This way we have determined Touch. We now formulate how to compute each of the static spatial relations defined in the main text, Fig. 4. Since the maximum and minimum coordinates (x, y, and z) of the convex hulls are the same as the maximum and minimum coordinates of the objects, it is easy to check conditions such as those in equation 5.
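The two-step point test described above (a cheap AABB early-out, then the exact per-face "left of plane" checks) can be sketched as follows; the plane orientation convention and all names are our assumptions, not the paper's implementation:

```python
def aabb(points):
    """Axis-aligned bounding box of a point cloud: (mins, maxs)."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def in_aabb(p, box):
    lo, hi = box
    return all(lo[i] <= p[i] <= hi[i] for i in range(3))

def side_of_plane(p, plane, eps=1e-9):
    """Sign of a*x + b*y + c*z + d: +1 'left', 0 on the plane, -1 'right'.
    Planes are assumed oriented so that 'left' (positive) is inward."""
    a, b, c, d = plane
    v = a * p[0] + b * p[1] + c * p[2] + d
    return 0 if abs(v) <= eps else (1 if v > 0 else -1)

def locate(p, cloud, planes):
    """Return 'interior', 'boundary', or 'exterior' of the hull of `cloud`,
    using the AABB as an early-out before the per-plane tests."""
    if not in_aabb(p, aabb(cloud)):
        return 'exterior'                 # cheap rejection
    sides = [side_of_plane(p, pl) for pl in planes]
    if all(s > 0 for s in sides):
        return 'interior'                 # "left" of every face plane
    if any(s < 0 for s in sides):
        return 'exterior'
    return 'boundary'                     # on at least one face plane
```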
"Left" (L) and "Right" (R) relations as well as "Front" (F) and "Back" (Ba) relations between two convex hulls are also defined in a manner similar to the method of equation 5.These four relations are merged into "Around" (Ar) which is converted to "Around with touch" (ArT) if the convex hulls touch each other according to equation 3.
Moreover, to compute "Cross" (Cr), "Within" (Wi), "Partial within" (Pwi), "Contain" (Co), and "Partial contain" (Pco) between two objects, the objects are first modeled in terms of convex hulls and then their Rel matrix (equation 2) is computed and compared against the characteristic pattern of each relation. In all of these patterns, "∅" and "¬∅" represent the empty and non-empty sets, respectively, while the "*" sign indicates an arbitrary value.
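Conceptually, each of these relations corresponds to a pattern of ∅ / ¬∅ / * entries over the six Rel entries. A minimal wildcard matcher might look as follows; note that the "Within" pattern shown is an illustrative guess, not the paper's exact definition:

```python
def matches(rel, pattern):
    """Match a 2x3 Rel matrix (entries 'empty' / 'nonempty') against a
    relation pattern whose entries are 'empty', 'nonempty', or '*' (any)."""
    return all(p == '*' or p == r
               for row_r, row_p in zip(rel, pattern)
               for r, p in zip(row_r, row_p))

# HYPOTHETICAL pattern for "Within": alpha lies entirely in the interior
# of beta (no alpha points on or outside delta-beta), while beta extends
# outside alpha. Illustrative only.
WITHIN_PATTERN = [['nonempty', 'empty', 'empty'],
                  ['*',        '*',     'nonempty']]
```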

Decomposition of Complex Manipulation Actions into Atomic Actions
In section 5.1 of the main paper, we introduced the concept of atomic actions as the cornerstone of complex manipulation actions. Each atomic action is represented by a quintuple including information about its subject, action type, object, spatial relation between subject and object, as well as the place of occurrence. Action types have only 4 different, simple options: "Touching (T)", "Untouching (U)", "Moving together (Mt)", and "Fixed moving together (Fmt)". Real actions like "cut", thus, consist of a chain of atomic actions. Hence, we need a framework that is able to decompose complex "real" manipulation actions into their constituent atomic actions, which then provides an efficient mechanism for their representation and recognition. We use a Context-Free Grammar (CFG) for this purpose. A CFG is defined as G = ⟨S, N, T, P⟩, where S is the start symbol, N is the list of non-terminals, T contains the list of terminals, and P includes the list of production rules. In a CFG, every production rule is of the form A → α, where A is a single non-terminal symbol and α is a string of terminals and/or non-terminals. A formal grammar is considered "context-free" when its production rules can be applied regardless of the context of a non-terminal: no matter which symbols surround it, the single non-terminal on the left-hand side can always be replaced by the right-hand side.
We can map this structure to manipulation actions because, similarly, every complex action is built from smaller blocks, where each block is an atomic action or a sequence of those. Furthermore, a complex action also has a basic recursive structure and can be decomposed into simpler actions. Hence, intuitively, the process of observing and interpreting manipulation actions is syntactically similar to natural language understanding [38]. Therefore, we borrow the structure of a Context-Free Grammar for the purpose of parsing complex manipulation actions into their constituent components, which also leads to an efficient way of action representation. Within this scope, "hand(s)", the list of "atomic action types", "objects", "spatial relations", and "places" are defined as terminals. Moreover, "S_p" or "sentence phrase", "O_p" or "object phrase", "A_p" or "action phrase", and "SR_p" or "spatial relation phrase" are our non-terminals, which, with their recursive properties, create proper representations of the possible sequences of atomic actions. Another non-terminal is "Sub", which indicates the performer of the action; this can be a hand or a combination of the hand and the object being touched by the hand for a purpose (e.g., a tool). The latter is called a merged entity (Me). "S" is the start symbol from which the construction of all action sequences begins.

3 Results

Here, we first report quantitative results from the implementation of the spatial reasoning algorithm based on convex-hull modeling. Then we review complex manipulation action recognition results obtained using the proposed context-free grammar.

Spatial reasoning
To evaluate the improvement achieved by convex-hull modeling compared to AABBs (axis-aligned bounding boxes), which have been used in previous works [42] for detecting correct spatial relations, we used the "KIT Bimanual Actions Dataset" [6]. This dataset includes 540 recordings, each containing various objects with different spatial relations. These relations change during the progress of an action along the video frames. For example, in a cutting scenario, at the beginning the knife is located "around" (near) a fruit, then it touches its top, then it goes inside the fruit, etc. We considered every tenth frame in each scenario and obtained the spatial relations between each pair of objects (approximately five pairs in each video clip) using both convex hulls and AABBs. In this dataset, humans perform their movements at normal speed; hence, this procedure is fast enough to recover all changes in spatial relations. The results indicate that among 110,500 cases, and compared to the ground truth provided in the dataset for the previously defined SSRs [42] and the additional ground truth provided by us for the newly defined SSRs using convex hulls, our object models based on convex hulls and AABBs were successful in 100,665 (91.1%) and 83,427 (75.7%) cases, respectively. In addition, as a non-negligible improvement, relations such as Cr, Wi, Pwi, Co, and Pco can now be distinguished by convex-hull modeling, in contrast to the AABB-based approach. The resulting accuracy is important with respect to two aspects: it is required (1) for the correct production of the semantic representations (SRes) as the basis of our hybrid statistical method, and (2) for the generation of more precise and human-like video descriptive sentences.

Complex manipulation actions recognition
The "KIT Bimanual Actions Dataset" [6] contains 540 recordings in which 6 subjects performed 9 tasks (5 in a kitchen context, 4 in a workshop context) with 10 repetitions each. The dataset comprises 221,000 RGB-D image frames with a resolution of 640 × 480 px, forming videos with a total length of 2 hours and 18 minutes. The tasks are combinations of 14 sub-tasks, where each sub-task can be an atomic action (e.g., lift) or a mixture of several atomic actions (e.g., cut or stir). Actions are fully labeled for both hands separately. Table 6 shows the list of manipulations in the dataset as well as the accuracy of our CFG-based approach in classifying them. The grammar used (equation 12) decomposes the complex actions of each hand (left or right) into their constituent atomic actions; then, based on (1) the sequence of identified atomic actions and (2) the library of action mappings, the manipulation type is recognized.
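Step (2) above amounts to a lookup in the library of action mappings. A minimal sketch of this step follows; the sequences and labels here are simplified stand-ins, not the dataset's actual entries:

```python
# Hypothetical excerpt of a "library of action mappings": keys are
# sequences of atomic action types, values are manipulation labels.
LIBRARY = {
    ('T', 'Mt', 'U'): 'pick & place',   # touch, move together, untouch
    ('T', 'Fmt', 'U'): 'push',          # touch, fixed moving together, untouch
}

def recognize(atomic_actions):
    """Map a sequence of identified atomic action types to a manipulation
    label, or 'unknown' if the sequence is not in the library."""
    return LIBRARY.get(tuple(atomic_actions), 'unknown')
```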

Strengths:
-Interpretable, Rule-Based Descriptions: This method provides interpretable descriptions based on predefined rules.
-Multi-Level Descriptions: It supports multi-level descriptions, from atomic actions to comprehensive narratives.
-Effective for Known Patterns: Particularly effective for complex actions with well-understood patterns.
-Handling Unseen Data: Strategies can be employed to adapt to new, unseen actions, although manual intervention may be required.

Limitations:
-Manual Engineering: It relies on manual feature engineering and linguistic knowledge.
-Limited Adaptability: May struggle with diverse, unseen actions not covered by predefined rules.

End-to-End Method
Approach: The End-to-End Method employs Convolutional Neural Networks (CNNs) for feature extraction, Gated Recurrent Units (GRUs) for temporal modeling, and Long Short-Term Memory networks (LSTMs) for description generation.

Strengths:
-Data-Driven Learning: It is data-driven, learning directly from data without the need for manual engineering.
-Generalizability: Generalizes well to diverse actions and descriptions, making it suitable for a wide range of tasks.
-Scalability: Scalable and effective when handling large datasets.
-Handling Unseen Data: Capable of adapting to new actions or scenarios, provided they share similarities with actions seen during training. Strategies like data augmentation and transfer learning can enhance this capacity.

Choosing the Right Method
The choice between the Hybrid Statistical and End-to-End methods should align with specific application requirements.
Use the Hybrid Statistical Method when:
-Well-defined actions and multi-level descriptions are needed.
-Interpretability is a critical requirement.
-Dealing with complex actions with known patterns.
-Handling Unseen Data: It can adapt to some extent with manual intervention.
Use the End-to-End Method when:
-Abundant training data is available.
-Diverse actions need to be handled effectively.
-Automated, data-driven solutions are preferred.
-Handling Unseen Data: It offers adaptability and can leverage strategies to handle new actions.
In some scenarios, a combination of both methods might be suitable, where the hybrid method handles complex actions, and the end-to-end method automates descriptions for more common or numerous actions.

APPENDIX

1 Spatial Reasoning

1.1 Object modeling by convex hull

Figure 6: (a) Two object point clouds, (b) the 3D convex hulls of the two point clouds and their touching plane.

S → S_p | S_p . S_p
S_p → Sub . A_p | S_p . Sub . A_p
Sub → Hand | Me
A_p → A . O_p | A_p . O_p
Me → Hand . O
O_p → O . SR_p
SR_p → SR . P
Hand → Hand_L | Hand_R | Me
A → A_1 | A_2 | ... | A_183*
O → O_1 | O_2 | ... (list of objects)
P → Ground | Air | O_1 | O_2 | ... (list of objects)
SR → Ab | Be | To | Bo | Ar | ArT | Cr | Wi | Pwi | Co | Pco    (12)

*: The total number of valid permutations between the components of an atomic action is 183. Therefore, we have 183 different atomic actions, all of which are stored in a repository called the library of action mappings.
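A small subset of grammar (12) can be written down and expanded directly. The following sketch (our own naming, not the paper's parser) performs a naive top-down derivation, with production choices supplied by the caller:

```python
# Subset of grammar (12): each non-terminal maps to a list of
# alternative right-hand sides (only the first alternative of each
# rule is included here).
RULES = {
    'S':   [['Sp']],
    'Sp':  [['Sub', 'Ap']],
    'Sub': [['Hand']],
    'Ap':  [['A', 'Op']],
    'Op':  [['O', 'SRp']],
    'SRp': [['SR', 'P']],
}

def derive(symbol, choose):
    """Expand `symbol` into a list of terminal symbols; choose(nt)
    returns the index of the production to use for non-terminal nt."""
    if symbol not in RULES:
        return [symbol]                     # terminal symbol
    out = []
    for s in RULES[symbol][choose(symbol)]:
        out.extend(derive(s, choose))
    return out
```

Expanding S with the first production everywhere yields the quintuple structure of a single atomic action: subject, action type, object, spatial relation, and place.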

Fig. 7 contains a parse tree showing the decomposition of a manipulation (here: push) into its constituent atomic actions using the above-defined CFG rules.

Figure 7: Decomposition of a push action into its components according to the CFG rules (equation 12).

4 Itemized comparison of the different methods

4.1 Hybrid Statistical Method

Approach: The Hybrid Statistical Method combines handcrafted features, rule-based modeling, and linguistic knowledge.

Table 2: Comparison of LSTM-based encoding and decoding methods on both a representative subset and the full set of manipulation actions in the MSR-VTT dataset [37].

Table 3: Comparison of semantic summarization performance on the ActivityNet Captions dataset.

Table 4: Comparative analysis of translation methods for robotic manipulation actions using the EPIC-Kitchens dataset.

Table 5: Qualitative Analysis of Description Generation on the KIT Bimanual Actions Dataset (Scores Out of 5).

Table 6: Classification accuracy according to the CFG-based approach on the "KIT Bimanual Actions Dataset" [6]. Classifying manipulations at an acceptable rate (≈95%) promises to produce more accurate video descriptions with fewer errors, also outside the proposed framework.