Introduction

Industries currently seek to improve their production processes through the gradual elimination of activities that do not add value, since such activities affect productivity and quality (Jadhav et al. 2014). Non-value-adding activities generate operational costs and, consequently, cost overruns in the products; in addition, new aspects such as the use of information technology (IT) are now being included in the evaluation of productivity and efficiency (Abri and Mahmoudzadeh 2014). Different types of extra operations are regarded as inefficiencies within organizations. To overcome these problems, standard data systems were introduced into process planning so that a great variety of operations could be described with basic times. This approach is known as a predetermined time system (PTS), pioneered by Frederick Winslow Taylor in 1881 (Aft 2000; Niebel and Freivalds 2003). The methodology has been widely accepted in industry because it is very practical, and more than fifty different PTS currently exist (Lee and Chan 2003). According to Meyers and Stewart (2002), time standards help to assess and measure productivity; this is a key element in managing information systems for process planning and improvement. There are different categories of PTS, of which methods-time measurement (MTM) is the most relevant (Kanawaty 1992; Meyers and Stewart 2002). It was developed by Maynard in the United States (Maynard et al. 1948). From an industrial engineering perspective, MTM was presented as an alternative for implementing PTS in the workplace. MTM helps to characterize, simplify and improve the efficiency and effectiveness of manual tasks, as mentioned by Meyers and Stewart (2002).

At the beginning of the 20th century, in 1911, Frederick Winslow Taylor reported that applying time and motion studies to workers improves productivity. Historically, MTM has been applied in the manufacturing industry (Chatzis 1999; Al-Saleh 2011) and in health procedures (Finkler et al. 1993; Harewood et al. 2008), among other fields (Gunasekaran et al. 2000; Hendrich et al. 2008; Maillck and Asjad 2014). In times established with a PTS, mental processes as well as physical motions play an important role in the labor activities of an operator. These aspects are taken into account by another kind of PTS called modular arrangement of predetermined time standards (MODAPTS). For example, MTM includes many physical motions such as “move”, “get” and “put”, whereas MODAPTS adds “vocalize” as a new element (Lee and Chan 2003). The basic time unit in MODAPTS is the MOD, equivalent to 0.129 s. The methodology defines basic times in MODs when the labor activities involve vocalizing words (MODAPTS 2000).

The traditional way of collecting information for MTM studies is based on direct observation and time quantification. The observations are performed by expert staff and the information is sometimes recorded manually. Unfortunately, this technique is not effective for registering activities of great complexity, which are frequently seen in industry (Kanawaty 1992). One of the biggest challenges an observer faces is the ability to differentiate between the macro and micro motions present in manufacturing activities (Meyers and Stewart 2002). Micro and macro motions are a classification of motions delimited by time intervals, first studied by Frank and Lillian Gilbreth (Niebel and Freivalds 2003). They founded the study of body motions as a means of improving operations by eliminating unnecessary motions, simplifying the necessary ones, and then creating an advantageous motion sequence to obtain maximum efficiency (Niebel and Freivalds 2003). In some MTM applications, an observer must make value judgments about the analyzed activities. These judgments focus on ratings of the work rhythm and the mean performance of the worker (Maynard et al. 1948). The information obtained from MTM can be used in ergonomic evaluations and in calculating the distribution of work tasks with the aim of improving efficiency. For example, Kuhn and Laurig (1990) applied MTM and detected an irregular balance between the load on the right and left hand during work. Techniques used to establish time standards for motions include stopwatch time study, computerized data collection, standard data, PTS, work sampling and MOST (Maynard Operation Sequence Technique). It is worth mentioning that methodologies such as MOST have been implemented in a software package called ergoMOST, which is used as an ergonomic survey tool (H.B. Maynard and Company, Inc.) to analyze work sequences.

In this context, automated motion analysis can be an alternative to the imprecision of human observation and to repetitive analysis. Motion capture systems (MCS) record human body movements by attaching markers (passive or active) or electronic devices to the body; cameras or other electronic devices then acquire information on the motions of the selected body parts. An MCS helps to describe realistic human motions in order to simulate locomotion in work tasks. The application of MCS is a wide research area that has been studied intensively in recent years because of its many potential applications, as discussed in the review by Moeslund et al. (2006). According to Wang et al. (2003), many researchers have focused on applications in biomedical engineering (Cappozzo et al. 1995; Frigo et al. 1998; Balan et al. 2005; Tinoco and Arias-Moreno 2014), computer vision (Deutscher et al. 1999), real-time animation (Hu et al. 2010), computer games, movie making and robotics (Fu and Krovi 2008), among others.

A virtual environment is synthesized from a group of geometrical entities that define an interactive space and provide a representation of reality (Chen 1995). Under this definition, activities performed by humans can be represented and translated into another environment where the information is described and analyzed virtually. Virtual environments have been studied extensively in the search for real applications, and the topic has found wide acceptance in computer graphics, as described in studies by Noser et al. (1995), Lee et al. (2006) and Reitsma and Pollard (2007).

This study presents an approach to the time and motion analysis of real workplaces that combines a virtual environment with a motion capture system. The proposed methodology is based on manual assembly processes, in which the hands are recorded in order to track infrared (IR) passive markers placed on them. The real workplace is represented virtually by a geometric discretization, creating a virtual workplace from computer patches assembled with basic geometries. The hand motions of the operator are captured with a motion capture system and embedded into the virtual environment. On this basis, a time and motion analysis is carried out by a purpose-built algorithm that shows, as a simulation, the interaction between the recorded motion data and the virtual environment. The approach presented here was tested in an intentionally designed case study limited to a planar space.

Materials and methods

Tracking hands with motion capture in a workplace

In manufacturing processes, workplaces are designed according to the requirements of the assembly operations. If manufacturing includes manual operations, the workplace must be delimited by the reach of the hands, as shown in Fig. 1, and it is defined by the principles of motion economy (Niebel and Freivalds 2003). The maximum manual operative space must therefore be designed so that operations are located within reach of the hands. Many devices and techniques have been developed to track the hands and obtain their paths; one of them is the tracking of passive (Tinoco and Arias-Moreno 2014) or active markers on the skin. This technique is very popular despite its limited precision, because human movements are difficult to track owing to their many degrees of freedom; however, reducing the dimensional space mitigates these drawbacks. The limitations of tracking markers on the skin are discussed by Corazza et al. (2006). As discussed in their study, when choosing any technique or methodology, the advantages and disadvantages must be evaluated in order to classify the applications. For our purpose, factors such as acquisition cost, accessibility and versatility are taken into account when choosing the motion capture system. This analysis is based on the review by Moeslund et al. (2006), which discusses the aspects mentioned above.

Fig. 1
figure 1

Scheme to track hands using a camera and passive markers

For this study, the hand motion capture system consists of an infrared (IR) camera, passive markers on both hands, and the open-source Tracker video analysis software (Brown 2009), as shown in the scheme of Fig. 2a. When a person carries out a labor operation in the defined workplace, the IR camera registers the motions of the markers located on each hand. The video is analyzed with the software, which makes it possible to determine the positions of the markers in the projective plane \( x - y \) (real plane). The plane \( x - y \) should be parallel (projective workplace) to the camera plane; however, these planes present small deviations that must be adjusted.

Fig. 2
figure 2

a Simulated workplace with the capture motion system. b Vision from IR camera. c Coordinate transformations

A complete scheme of the system is shown in Fig. 2a, and its components are listed in Table 1. The IR camera (OptiTrack V100:R2) records the passive markers at 100 frames per second in a common video format (e.g., AVI, see Fig. 2b) using the OptiTrack ARENA software for real-time display. The video is subsequently processed to obtain the displacements of the markers. The capture setup covers a maximum range of 3–5 m in two dimensions.

Table 1 Materials for motion capture

To adjust the deviations of the marker positions, two calibration steps must be carried out: scaling (see Fig. 2a) and coordinate transformation (see Fig. 2c). It is important to mention that the camera measurements are not given in length units; therefore, the camera coordinates must be scaled. To scale the coordinates from the video, pixels are converted to length units. Inside the video scene, a calibration triangle (structure B) is used, as shown in Fig. 2.

The calibration triangle defines the dimensional transformation, establishing a pixel-to-length correspondence through a scale parameter \( \alpha \). The scaling is calculated as follows

$$ p_{i}^{\prime } = \alpha f_{pi},\quad \forall i = 1,2,3, \ldots ,n $$
(1)

where \( f_{pi} \left( {x_{mi} ,y_{mi} } \right),\forall i = 1,2,3, \ldots ,n \) are the absolute coordinates of the markers in pixels and \( n \) is the number of markers. Afterwards, a coordinate transformation must be carried out, since the system \( x - y \) is rotated with respect to \( x^{\prime} - y^{\prime} \). Therefore, the following transformation is applied

$$ p_{i} = Tp_{i}^{\prime },\quad \forall i = 1,2,3, \ldots ,n, $$
(2)

where \( p_{i} = \left[ {\begin{array}{*{20}c} {x_{i} } & {y_{i} } & {z_{0} } \\ \end{array} } \right]^{T} \) are the real positions, \( p_{i}^{\prime } = \left[ {\begin{array}{*{20}c} {x_{i}^{\prime } } & {y_{i}^{\prime } } & {z_{0} } \\ \end{array} } \right]^{T} \) are the scaled positions obtained with the camera, \( z_{0} \) is the distance from desk C (see Fig. 2) to the camera plane, and

$$ T = \left[ {\begin{array}{*{20}c} {\cos \theta } & {\sin \theta } & 0 \\ { - \sin \theta } & {\cos \theta } & 0 \\ 0 & 0 & 1 \\ \end{array} } \right] $$
(3)

where \( T \) is the rotation matrix about \( z \) and \( \theta \) is the rotation angle of the coordinate system. Detailed descriptions of coordinate transformations are given by Hartley and Zisserman (2003). With the methodology presented in this section, the hand movements are registered and transformed into real coordinates. The motions are then classified within the workplace, which is constrained and discretized into the zones that the hands use to perform operations and transport procedures.
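As an illustration, the two calibration steps of Eqs. (1)–(3) can be applied directly to the tracked pixel coordinates. The following MATLAB sketch uses assumed values for the scale \( \alpha \), the rotation \( \theta \) and the distance \( z_{0} \), and illustrative variable names that are not part of the original implementation.

```matlab
% Minimal sketch of the calibration described by Eqs. (1)-(3).
% pixelCoords holds illustrative marker positions in pixels (n-by-2).
pixelCoords = [120 340; 128 338; 135 331];

alpha = 1.25;          % pixel-to-mm scale from the calibration triangle (assumed value)
theta = deg2rad(2.0);  % small in-plane rotation between camera and workplace axes (assumed)
z0    = 1500;          % distance from desk C to the camera plane, in mm (assumed)

% Eq. (1): scale pixel coordinates to length units
pScaled = alpha * pixelCoords;                               % n-by-2, now in mm

% Eq. (3): rotation matrix about z
T = [ cos(theta)  sin(theta)  0;
     -sin(theta)  cos(theta)  0;
      0           0           1];

% Eq. (2): transform each scaled point [x' y' z0]' into real coordinates
pReal = (T * [pScaled, z0*ones(size(pScaled,1),1)]')';       % n-by-3 real positions
```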

Virtual environment for motion recognition

This section shows how a two-dimensional virtual environment can be designed in a simple way. The proposed virtual environment consists of small regions that represent domains of the real workplace. As mentioned before, the information from the hand trajectories can be represented in a virtual environment analogous to the constrained workplace, with the aim of classifying the operations the hands perform. In general, a workplace can be divided into regions: regions where the hands carry out assembly procedures are defined as operation zones, whereas regions where the hands transport something or simply transit freely are defined as transition or transport zones. Each zone is unique and is distinguished by its geometric shape. In our case, three shapes are proposed: the triangle, the rectangle and the circle, as shown in Fig. 3. It is important to point out that the proposed virtual environment is limited to two-dimensional space. To visualize this, consider a virtual workplace defined by a geometric discretization of the work domain, as shown in Fig. 3a. In this figure, the regions A, B, C and D are operation zones and E, F, G and H are transition or transport zones, as defined above. Operation zones are regions where operation procedures are carried out by the hands. Transition or transport zones are regions where elements are picked up and transported to the operation zones; the hand moves through these zones regardless of whether it is holding, waiting or transporting elements.

Fig. 3
figure 3

a Virtual environment created in MATLAB for transition (EFGH) domains and operation domains (ABCD), e.g. circular motion. b Example for the bimanual process chart of (a)

In Fig. 3a, the red markers represent simulated hand trajectories. It can be seen that the position of the red marker crosses both transport and operation regions. When the marker crosses these zones, it can be detected by means of logic states that are established for each geometric domain of the virtual workplace. In the example of Fig. 3a, the circle (red), rectangle (blue), triangle (brown) and square (yellow) are operation zones and E, F, G and H are transport zones. To collect the information in each time interval, the movements of each hand can be registered and listed in a table commonly used in motion economy, known as the bimanual process chart; an example is shown in Fig. 3b. The bimanual process chart helps to group the motions and to improve the work method, with the aim of finding an optimal method, as discussed by Niebel and Freivalds (2003) and Groover (2007). In our case, however, only two procedures are distinguished, operation and transport, instead of the usual four (operation, transporting, holding and waiting). This can be a limitation of the motion analysis; nevertheless, it is acceptable here since operation, holding and waiting are treated as a single activity in this study. The simulated environment illustrated in Fig. 3a was designed with the patch function of the MATLAB programming language (Marchand and Holland 2003) as a particular example.
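To illustrate how such an environment can be assembled with patch, the following MATLAB sketch draws one operation zone and two transport zones; the coordinates, colors and the random trajectory are illustrative assumptions and do not reproduce the exact layout of Fig. 3a.

```matlab
% Sketch of a 2D virtual environment built with patch (coordinates are illustrative).
figure; hold on; axis equal; axis([0 600 0 400]);

% Operation zone A: rectangle, vertices listed counterclockwise
patch([200 400 400 200], [100 100 300 300], [0.6 0.9 0.6]);   % green

% Transport zone E: triangle
patch([50 180 50], [100 200 300], [0.8 0.8 0.8]);             % gray

% Transport zone F: circle approximated by a polygonal patch
t = linspace(0, 2*pi, 60);
patch(500 + 40*cos(t), 200 + 40*sin(t), [0.8 0.8 0.8]);

% Overlay a simulated hand trajectory as red markers
plot(200 + 150*rand(1,50), 100 + 150*rand(1,50), 'r.');
```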

To design a 2D virtual environment, basic geometries are defined by means of well-known mathematical representations. In a mathematical sense, basic geometries such as the circle, triangle and rectangle (the square being a particular case) have properties that distinguish them from other shapes. The domains defined by the triangle, circle and rectangle are convex sets. A convex set must satisfy the following requirement (see Kanehiro et al. 2008): let \( O \) be a closed subset of \( {\mathbb{R}}^{2} \), for example \( O = \left\{ {x \in {\mathbb{R}}^{2} \left| {f(x) \le 0} \right.} \right\} \); then \( O \) is convex if and only if, for any \( x_{1} \in O \) and \( x_{2} \in O \), every point of the line segment \( x^{*} = \lambda x_{1} + (1 - \lambda )x_{2} ,\;\lambda \in (0,1) \), satisfies \( x^{*} \in O \). The triangle, rectangle and circle shown in Fig. 4 all satisfy this convexity definition. Other geometries, including other convex polygons, can also be considered; nevertheless, not all polygons are convex. Detailed descriptions of convex sets can be found in Valentine (1964).

Fig. 4
figure 4

Convex sets used to create 2D virtual environment: a Triangle. b Rectangle. c Circle. d Orientation of a triangular geometry

The geometries shown in Fig. 4a, b and c are the basis for the construction of the virtual workplace. To detect active positions of the markers (see Fig. 3a) inside each set, the following mathematical descriptions are needed. Each geometry is constructed counterclockwise, such that the unit normal vectors (\( n_{1} ,n_{2} ,n_{3} \) in Fig. 4d) point outward from the domain; an example is shown in Fig. 4d for a triangular domain.

Assume that the boundaries of the polygons or circles are represented by \( f(x,y) = 0 \), where the points \( (x,y) \) lie on the boundary. A point \( (x,y) \) is inside the domain delimited by \( f(x,y) \) if the inequality \( f(x,y) < 0 \) is satisfied; conversely, \( f(x,y) > 0 \) if \( (x,y) \) is outside the domain. These conditions are illustrated in Fig. 4a, b and c. Each geometric domain is therefore delimited by a set of linear (polygons) or nonlinear (circle) inequalities that verify whether a point is inside or outside the geometry, and this information can be used in the motion analysis. The mathematical descriptions of the geometries shown in Fig. 4a, b and c are as follows:

  • Triangle \( {\mathbf{Ax}} \le 0 \):

    $$ \left[ {\begin{array}{*{20}c} {y_{2} - y_{1} } & {x_{1} - x_{2} } & {y_{1} x_{2} - x_{1} y_{2} } \\ {y_{3} - y_{2} } & {x_{2} - x_{3} } & {y_{2} x_{3} - x_{2} y_{3} } \\ {y_{1} - y_{3} } & {x_{3} - x_{1} } & {y_{3} x_{1} - x_{3} y_{1} } \\ \end{array} } \right]\left\{ {\begin{array}{*{20}c} x \\ y \\ 1 \\ \end{array} } \right\} \le 0 $$
    (4)
  • Rectangle \( {\mathbf{Ax}} \le 0 \):

    $$ \left[ {\begin{array}{*{20}c} {y_{2} - y_{1} } & {x_{1} - x_{2} } & {y_{1} x_{2} - x_{1} y_{2} } \\ {y_{3} - y_{2} } & {x_{2} - x_{3} } & {y_{2} x_{3} - x_{2} y_{3} } \\ {y_{4} - y_{3} } & {x_{3} - x_{4} } & {y_{3} x_{4} - x_{3} y_{4} } \\ {y_{1} - y_{4} } & {x_{4} - x_{1} } & {y_{4} x_{1} - x_{4} y_{1} } \\ \end{array} } \right]\left\{ {\begin{array}{*{20}c} x \\ y \\ 1 \\ \end{array} } \right\} \le 0 $$
    (5)
  • Circle \( f_{c} (x,y) \):

    $$ \left( {x - x_{c} } \right)^{2} + \left( {y - y_{c} } \right)^{2} - R^{2} \le 0 $$
    (6)

Equations (4), (5) and (6) define the constraints for the design of the subdomains, making it possible to determine when the markers are inside or outside of them. The design of the virtual workplace is an extrapolation of the domains defined in the real space. Using any programming language, an algorithm can be developed to verify when a marker is located inside or outside each subdomain; in our study, MATLAB is used (Marchand and Holland 2003). The choice of basic geometry depends on the shapes that best fit the real workplace.
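As a sketch of such an algorithm, the following MATLAB fragment evaluates the inside/outside tests of Eqs. (4) and (6) for a single marker position; the vertices, center and radius are illustrative, and the built-in inpolygon call is shown only as an equivalent alternative for convex polygons.

```matlab
% Sketch of the inside/outside tests from Eqs. (4)-(6); marker point p = [x y] in mm.
p = [250, 180];                            % illustrative marker position

% Triangle with counterclockwise vertices (x1,y1),(x2,y2),(x3,y3) - Eq. (4)
v = [50 100; 180 200; 50 300];             % illustrative vertices
A = [v(2,2)-v(1,2), v(1,1)-v(2,1), v(1,2)*v(2,1)-v(1,1)*v(2,2);
     v(3,2)-v(2,2), v(2,1)-v(3,1), v(2,2)*v(3,1)-v(2,1)*v(3,2);
     v(1,2)-v(3,2), v(3,1)-v(1,1), v(3,2)*v(1,1)-v(3,1)*v(1,2)];
inTriangle = all(A * [p, 1]' <= 0);

% Circle of center (xc,yc) and radius R - Eq. (6)
xc = 500; yc = 200; R = 40;
inCircle = (p(1)-xc)^2 + (p(2)-yc)^2 - R^2 <= 0;

% The built-in inpolygon offers an equivalent test for any convex polygon, e.g. a rectangle
inRectangle = inpolygon(p(1), p(2), [200 400 400 200], [100 100 300 300]);
```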

Methodology for automated time and motion analysis

This section proposes a methodology for analyzing times and motions based on motion capture and extended to a virtual environment; a scheme of the methodology is shown in Fig. 5. A workplace consists of a real space delimited by a job area. In this area, all the elements necessary to assemble a product, including tools and assembly parts, are arranged. The job area is divided into subdomains, so that the elements of the product are organized in specific zones or subareas. The operator interacts with the workplace through the job template and must know the assembly method in order to perform the activities and the previously designed procedures.

Fig. 5
figure 5

Scheme of the proposed methodology

IR passive markers are placed on the hands to recognize the movements while the operator carries out the assembly tasks. The scene is captured with an IR video device that records the hand motions. A calibration system must be included in the job scene to scale and transform the camera coordinates into real coordinates, as explained in Sect. 1. The workplace is discretized into the zones where the assembly elements are located, and these zones are reproduced by means of the strategy proposed in Sect. 2, which is based on the use of basic geometries to construct the virtual environment.

The recorded video is processed with the open-source software (Brown 2009). With this software, the positions and velocities of the markers placed on the hands are obtained and used to simulate the assembly performed by the operator. An algorithm is then applied to identify where the hands cross and interact with the designed virtual zones, and this interaction can be visualized in a computer simulation.

In the scheme shown in Fig. 5, the final stage is the automated time and motion analysis, which includes simulations of the operations, a zone frequency analysis, a velocity frequency analysis and a modified bimanual process chart. The zone and velocity frequency analyses refer to the number of times a position falls within a transport or operation zone and, for the velocity case, the number of times a given integer velocity magnitude occurs. This provides information about the activities performed by the hands in each zone and about whether the hands move at low or high velocities. To construct a bimanual process chart with this information, a modification is necessary because of the limitations of the proposed motion capture system. According to motion analysis theory, the operations should be classified into the four types described in Table 2. In this research, a transport area is one that fulfills the corresponding definition in Table 2, whereas the operation zones comprise all areas that fulfill the remaining requirements of Table 2, namely operation, waiting and holding.
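A minimal MATLAB sketch of the zone-frequency tally is shown below. The zone polygons, the sampled trajectory and the use of the built-in inpolygon (equivalent to the inequality tests of Eqs. (4)–(5)) are illustrative assumptions, not the actual case-study layout or implementation.

```matlab
% Sketch of the zone-frequency tally: each sampled hand position is assigned to a
% zone, and the counts give dwell times (cf. Fig. 10).
zoneNames = {'A', 'E', 'F'};                              % one operation, two transport zones
zonesX = { [200 400 400 200], [50 180 50], [420 560 560 420] };
zonesY = { [100 100 300 300], [100 200 300], [100 100 180 180] };

traj = [250 150; 255 160; 60 180; 450 120; 455 125];      % m-by-2 hand positions in mm (illustrative)
dt = 0.1;                                                 % sampling interval in seconds

counts = zeros(1, numel(zoneNames));
for k = 1:size(traj, 1)
    for z = 1:numel(zoneNames)                            % test every zone in turn
        if inpolygon(traj(k,1), traj(k,2), zonesX{z}, zonesY{z})
            counts(z) = counts(z) + 1;                    % frame k falls inside zone z
            break
        end
    end
end

dwellTime = counts * dt;                                  % seconds spent in each zone
dwellPct  = 100 * counts / size(traj, 1);                 % percentage of the total cycle
bar(dwellTime); set(gca, 'XTickLabel', zoneNames); ylabel('Time in zone (s)');
```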

Table 2 Fundamental operations of bimanual process chart (Kanawaty 1992)

Case study: assembling a toy tractor

In this section, a case study is designed to apply the methodology proposed in Sect. 2. The chosen product is a toy tractor, which is assembled by an operator in a simulated workplace. To assemble the toy tractor and to design the workplace, several criteria must be taken into account; these are listed in Table 3. The descriptions in Table 3 were analyzed and structured based on the report by Kanawaty (1992).

Table 3 Criteria for the design of workplace

The toy tractor is assembled by a manual procedure, such that all operations are performed on a work table, as shown in Fig. 6. The layout of the workplace is based on the principles of motion economy, which can be found in Kanawaty (1992) and Meyers and Stewart (2002). These principles state that the use of the human body, the layout of the workplace and the design of machines and tools must be considered as design parameters. Intentionally, the designed layout does not satisfy all the requirements of the economy principles, since some work domains were located randomly (Niebel and Freivalds 2003). This was done in order to identify the intentional errors in the analysis.

Fig. 6
figure 6

Sketch of the workplace and parts of the toy tractor

The toy tractor consists of 10 unique parts and 21 individual pieces. The parts of the product are listed in Table 4 and shown in Fig. 6. Each part of the toy tractor is related to an operation or transport zone delimited on the work table. The parts are organized in the simulated workplace so that each part is located inside one of the rectangles that divide the workplace. Space A is considered an operation zone, and B, C, D, E, F, G, H, I, J and K are transport zones.

Table 4 Parts of the toy tractor

The parts are assembled in six stages using a single tool (H in Fig. 6), and the assembly procedures are defined for each stage. In Fig. 7, the assembly stages are organized in an ideogram showing the step-by-step procedures and the operations necessary to carry out the assembly process.

Fig. 7
figure 7

Assembly ideogram

To assemble the toy tractor, a set of procedures are structured for each stage and these are described in the following way:

  • Operation 1: Chassis (A) and bodywork (B) are joined and fixed with bolts (E).

  • Operation 2: The front axle (F) is attached to the pre-assembled structure with one bolt.

  • Operation 3: The front tires (G) are assembled to the main structure with two bolts.

  • Operation 4: The back tires (K) are assembled to the main structure with two bolts.

  • Operation 5: The loader (D) is joined with two bolts.

  • Operation 6: The chair (J), the driver (I) and the exhaust pipe (C) are assembled, and the toy tractor is completed.

Before the experiment, the operator was trained in the established assembly method by means of a training plan based on practicing the method for 2 h per day. The mean assembly time was measured in each training session, with ten replications per session. A learning curve was obtained over 20 h of training and is shown in Fig. 8.

Fig. 8
figure 8

Learning curve for the manual assembly

The initial assembly time was 81.9 s in the first session and decreased to 76.9 s over 10 days of training. This indicates a learning process for the chosen assembly method, since the assembly times were reduced across the training sessions. The training reached a stationary regime, meaning that no further time variation was evident after 20 training hours. In this case study, the learning curve is used to guarantee that the operator standardizes the assembly time, thus simulating an experienced operator, and it shows that the operator acquired proficiency in the assembly method. The main objective of the training, however, is to avoid unplanned hand motions that would affect the method, since inexperience could perturb the collected information. More details about learning curves can be found in Nembhard and Uzumeri (2000).
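The text does not state a specific learning-curve model; a common choice is the power-law (Wright) curve, which could be fitted to the session means as sketched below in MATLAB. Only the first (81.9 s) and last (76.9 s) values come from the text; the intermediate session times are illustrative placeholders.

```matlab
% Sketch of a power-law (Wright) learning-curve fit, T(n) = T1 * n^b.
session  = 1:10;
meanTime = [81.9 80.8 79.9 79.2 78.6 78.1 77.7 77.4 77.1 76.9];   % first/last from text, rest illustrative

% Linear fit in log-log space: log T = log T1 + b*log n
coef = polyfit(log(session), log(meanTime), 1);
b  = coef(1);                 % learning exponent (negative when times improve)
T1 = exp(coef(2));            % estimated first-cycle time

plot(session, meanTime, 'o', session, T1*session.^b, '-');
xlabel('Training session'); ylabel('Mean assembly time (s)');
```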

Results and discussion

By applying the methodology proposed in Sect. 2 to the case study of Sect. 3, the positions and velocities of the hands were determined. Figure 9 shows the discretization of the real workplace in the designed virtual environment, in which the virtual subdomains are labeled with the names of the real subdomains shown in Fig. 6. The position trajectories obtained from the motion capture system are drawn on the virtual workplace. The operations were established with an assembly method, which the operator practiced during the training plan. After the training plan, many repetitions were performed to complete and improve the assembly. The best assembly time obtained for the toy tractor was 67.1 s, which, compared with the last time registered in Fig. 8 (76.9 s), represents an improvement of 12.7 %.

Fig. 9
figure 9

Trajectories of the hands on the virtual environment, a left hand. b Right hand

The hand trajectories are highlighted in red, the transport zones in gray and the operation zones in green. In the virtual workplace it can be seen how each hand moves over the entire workplace, which makes it possible to determine the zones the hands never crossed as well as where the hands passed through the same place. In this case study, each hand reached its corresponding motion zones and the hands did not leave the regions delimited by the workplace. However, this does not imply that the procedures of the assembly method are correct, since the operations performed by each hand still need to be measured and classified.

In Fig. 9, it can be observed that the hands perform different motions throughout the assembly. For a large part of the time, the hands were inside zone A, which is the operation zone. From a simple visual inspection, however, it is difficult to identify significant differences, and any such judgment would be relative. The designed algorithm therefore counts how many times the hands were in each zone according to the defined constraints; Fig. 9 itself serves as a simulation tool that emulates the locomotion of the hand movements.

Figure 10 shows, by means of a frequency analysis, how many times the hands were in each zone. The left hand spent 657 time intervals in zone A, where each time interval corresponds to 0.1 s, and the right hand spent 582 time intervals in this zone. These intervals correspond to 65.7 and 58.2 s for the left and right hand, respectively, representing 97.75 and 86.60 % of the total time inside operation zone A. As can be seen, the left hand remains inside the operation zone for almost the entire process; this indicates that the left hand should have more activity in the transport zones in order to reduce the time spent inside the operation zone. In regions B, C and D, the left hand carried out transport tasks corresponding to 1.4 % of the total time, whereas in regions F, G, H, I, J and K the right hand completed transport tasks equivalent to 13.24 % of the total time. There is thus a great difference between the hands when they transit through transport zones, which represents an opportunity to redistribute the workplace and revise the assembly method. In region E, both hands were involved in transport activities: the left hand performed 15 % of them and the right hand the remaining 85 %. The analysis shows that the transport activities violate some of the principles of motion economy mentioned by Niebel and Freivalds (2003), in particular the balance of workload between the hands, since both hands should be working during the operation time.

Fig. 10
figure 10

Frequency in each virtual domain for time intervals of 0.1 s

A descriptive analysis was performed on the velocity magnitudes of each hand, considering only the integer values of the velocities obtained by differentiating the position vector. A frequency analysis of the velocity magnitudes is shown in Fig. 11a; it indicates the number of times each velocity value appears in the measured motion data. From 20 mm/s up to 240 mm/s, the right hand shows more movements than the left, so the mobility of the right hand is greater in the range 0–240 mm/s. This can be seen more clearly when the percentages are computed in cumulative form, as presented in Fig. 11b. The cumulative percentage shows that the left hand accumulates 90 % of its velocity values below 100 mm/s; this means that the left hand moves mostly at low velocities, which are probably related to holding and waiting, and only 10 % of its movements occur above 100 mm/s. In contrast, the right hand accumulates 60 % of its velocities in the range from 0 to 100 mm/s, which means that 40 % of its movements occur at velocities greater than 100 mm/s. The differences between the hand velocities reveal the errors in the chosen method; in terms of motion economy, both the method and the workplace must be improved.

Fig. 11
figure 11

a Frequency of velocities in integer magnitudes. b Cumulative percentage of velocity frequencies
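A minimal MATLAB sketch of this velocity analysis is given below. The trajectory is a synthetic placeholder, and the 20 mm/s bin width is an illustrative choice rather than a value stated in the original implementation.

```matlab
% Sketch of the velocity-frequency analysis: speeds are obtained by numerical
% differentiation of the positions, rounded to integer mm/s, and summarized as a
% frequency and a cumulative-percentage plot (cf. Fig. 11).
dt   = 0.1;                                     % sampling interval in seconds
traj = cumsum(5*randn(600, 2)) + 300;           % m-by-2 hand positions in mm (synthetic)

vel   = diff(traj) / dt;                        % velocity components in mm/s
speed = round(sqrt(sum(vel.^2, 2)))';           % integer speed magnitudes (row vector)

edges  = 0:20:max(speed);                       % 20 mm/s bins (illustrative choice)
freq   = histc(speed, edges);                   % frequency of each speed bin
cumPct = 100 * cumsum(freq) / sum(freq);        % cumulative percentage

subplot(1,2,1); bar(edges, freq, 'histc');  xlabel('Speed (mm/s)'); ylabel('Frequency');
subplot(1,2,2); plot(edges, cumPct);        xlabel('Speed (mm/s)'); ylabel('Cumulative %');
```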

The results obtained with the proposed methodology (see Sect. 2) show an opportunity to classify quantifiable aspects of manual manufacturing processes: variables such as position, velocity and their frequency analyses may reveal inefficient procedures in the different work zones. This allows an analyst to carry out a rigorous, independent study of the hand motions in order to suggest improvements, which should focus on simplifying the motions in search of process efficiency, as mentioned by Niebel and Freivalds (2003).

Conclusions

The application of the proposed methodology for time and motion analysis was validated with a case study, and the results are presented as simulations. In the methodology, a motion capture system is used to estimate 2D trajectories of the operator's hands in the workplace. The virtual design of the workplace was implemented in a programming language, with the aim of visualizing each job zone. In the simulation, the hands interact with the virtual environment, and it can be observed where the hands never crossed as well as where they passed through the same place. From the results of the frequency analysis, the activities performed in each zone were observed and some deficiencies were found, since they had been introduced intentionally in the layout of the workplace. The velocity frequency analysis showed that the differences between the hand velocities reveal the errors in the chosen method, because the hand motions were not balanced in the design of the workplace. This research thus shows an opportunity to classify quantifiable aspects of manufacturing processes in which the procedures are carried out by hand. With the automated MTM methodology, the following advantages could be feasible in an industrial context: records can be kept continuously, methods and workplaces can be improved continually, and operator performance can be evaluated, among others. Future work will focus on the discrimination of motions, because in this study waiting, holding and operation were considered a single activity.