1 Introduction

Traditional rehabilitation in children with unilateral cerebral palsy consists of performing a wide variety of activities and exercises aimed at improving the functionality of the affected side and at reducing and preventing secondary impairments, which may limit movement. These activities should enhance residual strength, balance, coordination and cooperation of both sides, and increase the use of the plegic limb during daily life activities. Usually, during rehabilitation sessions, the therapist proposes tasks that require handling different objects, such as toys (e.g. ball, bricks, puzzles, pencil, etc.) or toy cutlery and utensils (e.g. cup, dish, fork, spoon, etc.), to improve the child's abilities to perform daily life activities. At the same time, the therapist observes the child and collects feedback to adapt the exercises and to choose the most engaging activities. Regarding the assessments, the therapist may evaluate the rehabilitation process by means of validated and standardized clinical scales, questionnaires or checklists.

On the technological side, the exponentially increasing capabilities of computer-based systems have made possible completely new forms of remote monitoring (Malasinghe et al. 2019). Despite the increasing availability and diffusion of small, smart and connected devices that are affecting our daily lives, these systems are not yet routinely used in rehabilitation settings. Rather, they are still mostly the subject of research projects, investigating the potential role of the environment (Jekel et al. 2016) or of objects themselves (Jovanov et al. 2016) for personalized eHealth monitoring services (Pharow et al. 2009), assessments of Activities of Daily Living (ADL) (Sacco et al. 2012; Prioleau et al. 2017), or their classification (Ugolotti et al. 2013).

AAL and IoT healthcare-related research also seems to focus mainly on the assessment of the health conditions of the people under monitoring, their behavior (Eisa and Moreira 2017), their adherence to medical treatments (Roy et al. 2017), patients' empowerment toward a more consciously healthy lifestyle (Grace et al. 2017), or improving their self-sufficiency (Poncela et al. 2019). Moreover, most of the studies reported in the literature target the adult and elderly population (Azimi et al. 2017), while very few studies investigate the effects of smart environments or smart objects on children. A recent PubMed search returned only one publication relevant to the area of pediatric rehabilitation (PR), describing a study where a small smart device was attached to objects, transforming them into simple computer mouse controllers (Szturm et al. 2014).

There are already several studies on the use of advanced technology in pediatric rehabilitation, but they often focus on the assessment of kinematic measures collected during interactions within virtual environments. In particular, there is a growing scientific literature on the use of Microsoft Kinect (Da Gama et al. 2015; Dimaguila et al. 2018) or Leap Motion (Fotopoulos et al. 2018; Nicola and Stoicu-Tivadar 2018; Niechwiej-Szwedo et al. 2018) devices, but the lack of interaction between the child and physical objects is still under investigation. As an example, some recent studies [e.g. on Duchenne Muscular Dystrophy (Massetti et al. 2018)] highlight the limited or insignificant transfer of abilities learned in a virtual environment to the real one. In addition, a recent work reports the importance of an ecologically grounded approach in the rehabilitation process (Vaz et al. 2017), if it is to affect subjects' daily lives effectively. Other studies take into consideration activities performed in semi-virtual conditions, by holding physical handles (e.g. the Nintendo Wii Remote or Sony PlayStation Move), but again the results suggest caution when considering the transfer of the acquired competencies to real-world daily activities (Ye et al. 2018). Finally, as we discussed in a recent paper (Olivieri et al. 2018), despite their popular use, commercial videogame platforms show a number of limitations when applied to PR. As reported in (Malone et al. 2016), difficulties in playing the game due to the child's deficits (e.g. handling the controllers properly or moving consistently with the game's directives) lead to less engagement and motivation, which are instead fundamental given the repetitive nature of PR activities, as "repetitive practice is now an established and pervasive rehabilitation principle" (Damiano 2009).

The great value of handling real (smart) objects in PR is of course based on the primary role of the hands in child development, as brilliantly explained by Dr. Montessori: "Movement of the hand is essential. Little children revealed that the development of the mind is stimulated by the movement of the hands. The hand is the instrument of the intelligence. The child needs to manipulate objects and to gain experience by touching and handling. There is an age when children play; we call it the age of play. But what is play if not to do those things which entail the movement of the hands? Children need to touch, to move all the things which they find in the environment" (Montessori 2012). This primary role of the hands becomes even more evident since they allow the child to discover the world and himself: "The child perfects himself even more by means of his hand than by means of the senses. He can develop himself and the personal talents of his nature when given the opportunity and guidance to produce and to discover by himself. Modern methods of education, in fact, are not only visual, but above all active." (Montessori 2015)

The importance of dealing with physical objects has been reported in many studies, such as (Fails et al. 2005), where the results of the coding yielded several advantages for physical environments over desktop environments, and (Verdine et al. 2014), regarding higher cognitive abilities. Some recent works have considered smart toys, though mostly for the assessment of motor abilities (Rivera et al. 2016) or focusing on early interventions on infants (Rihar et al. 2018).

Studies regarding the manipulation of three-dimensional objects, which hence seems crucial for effective PR, date back to the 1970s, when Shepard and Metzler's experiments on the "mental rotation" of images brought researchers' attention to the visual aspects of thought (Shepard and Metzler 1971; Shepard and Cooper 1982). Soon thereafter, Kosslyn and his collaborators produced experimental evidence for the "mind scan" of images (Kosslyn 1973; Kosslyn and Finke 1980).

Objects in general have an essential attribute, "affordance" (Garrido-Vasquez and Schubo 2014; Gibson 1979), which Gibson describes as the quality of an object that suggests the appropriate actions for manipulating it. In short, affordance is a kind of "call for action", an invitation with a neuronal basis: seeing an object automatically evokes what we could do with it, through the activation of a particular class of neurons, the so-called "canonical neurons" (Grezes et al. 2003). These neurons respond to the simple observation of an object when there is an intention to act, preparing the observer to use it. Moreover, it seems that "graspable objects exert a more powerful influence on attention and manual responses than images because of the affordances they offer for manual interaction" (Gomez et al. 2018).

In a society where children routinely interact with smartphones, tablets and computers, the issue becomes even more articulated: Don Norman, an expert in human–machine interaction (HMI), points out that many of the ways we interact with computer technology are not affordances but learned conventions (Norman and Draper 1986). This is also in line with the growing popularity of so-called tangible interfaces, based on interactions with physical objects instead of abstract representations behind a screen.

Given the above considerations, and since traditional PR has its foundations in the interaction with physical objects, we considered it important to investigate the effects of "smartifying" common real objects in a rehabilitation context, with the double aim of:

  • Sustaining the rehabilitation process by “gamifying” it, using appropriate engaging audio/visual feedback.

  • Collecting data about the real interaction of the child with the objects (which may be important for assessing the child's motor skills and performance).

As previously described, in traditional pediatric rehabilitation, therapists are in charge of providing stimuli, taking measurements while interacting with the children, and carefully observing their behavior and listening to their communications (Fig. 1, top). All these activities, especially when measurements are needed (e.g. reaction times, joint angles of the upper limbs during specific gestures), may result in a complicated setting, increasing the likelihood of missing meaningful conditions, making measurement mistakes, etc.

Fig. 1

Comparison of a usual PR scheme (top) and a computer assisted rehabilitation, based on smart objects and biofeedback (bottom)

The quantitative approach we are proposing, based on the use of smart objects, is instead depicted at the bottom of Fig. 1: in this case, the therapist may concentrate on the child's activities, while most of the stimulation and the related measurements can be carried out by the computer, the devices and the software running on them.

From a quantitative perspective, although many measures and indexes are already available, the ones commonly used in motion analysis [some of which are reported in (Ferrarin et al. 2005)] hardly fit this particular application: rehabilitation activities are only seldom based on stereotyped repetitions of the same movement, as in gait or motion analysis. On the other hand, the comprehensive indexes used to assess movements spanning several hours or days (such as the activity index or activity count, usually collected via actigraphs) are not suitable for short-duration, real-time acquisition and feedback, where it is instead necessary to measure continuously the position and orientation of specific body landmarks.

Considering other technologies, despite the high and increasing number of rehabilitation systems based on Microsoft Kinect or Leap Motion reported in the literature, these devices become less suitable when dealing with real objects: their presence can be perceived as a disturbance by devices designed to derive the position and orientation of limbs (Kinect) or hands/fingers (Leap Motion) moving freely in space, without any obstruction along the line of sight.

Finally, it should be noted that although there are already studies on assessing physical performance using inertial units (e.g. Coker-Bolt et al. 2017), most of them focus on the analysis and estimation of motion, and do not use this information to provide real-time feedback to the subjects.

In this paper, we present the protocol and some preliminary results of an exploratory feasibility study on the use of smart objects in the context of computer-assisted (pediatric) rehabilitation. The results suggest good potential, combining personalization to the specific capabilities and needs of the child with an intrinsically ecological approach (using common toys/objects), while exploiting the interactive and quantitative nature of smart systems. The idea also derives from our recent clinical and research experience gathered in the Computer Assisted REhabilitation (CARE) Lab, and from its theoretical foundations, which we summarized in the acronym EPIQ (Ecological, Personalized, Interactive, Quantitative) (Olivieri et al. 2018).

2 Methods

The pilot protocol consisted of a set of six exercises, centered on the interaction with a simple object while mimicking common play activities. The exercises (two round-based and four time-based), described in a recent conference paper (Meriggi et al. 2019), are:

  1. Roll the ball (time-based; duration: 1 min).

  2. Transfer the ball still from one basket to the other (round-based; 6 rounds).

  3. Launch the ball (time-based; 1 min).

  4. Throw the ball into a toy basketball hoop (time-based; 1 min).

  5. Transfer the ball in a specific orientation (round-based; 6 rounds).

  6. Bounce the ball off the floor (time-based; 1 min).

The general aim of the exercises is to promote bimanual upper limb movements in handling the ball, while taking into consideration the position of the hands and the inclination of the trunk. The movements promoted by the exercises included shoulder flexion/extension, elbow flexion/extension and hand pronation/supination, all of which are of key importance in ADL.

We selected a sponge ball as the key element for this exploratory research and studied how to measure the movements in an unobtrusive, continuous, simple and safe way. Thus, we selected miniaturized, lightweight and wireless inertial measurement units (IMUs), specially designed for human motion studies (Fig. 2). These IMUs (Wavetrack, Cometa S.r.l., Bareggio, Italy) were placed into the sponge ball and into specially designed pockets on custom adjustable straps with Velcro® (Fig. 2a, e), to be firmly positioned on the back of both hands and on the chest of the child. The latter three inertial units were also used to measure the angle of the hands and trunk with respect to the vertical axis. Finally, two big colored buttons (yellow and red) were connected via dedicated wireless modules (Cometa Footswitch, Fig. 2f) to set the start and stop conditions in the time-based exercises.

Fig. 2

a A Wavetrack sensing unit. b Wavetrack receiver. c Setup of the IMU in the ball. d IMU placed in a custom pocket on the back of the hand. e IMU in a belt attached to the chest of the subject. f Plastic enclosure to hold a Cometa Footswitch device to monitor the state of a big button (on the top of the cardboard box)

Custom software, named PRESO (Pediatric Rehabilitation Exercises with Smart Objects), was developed on purpose to gather and save the quantitative information measured by the IMUs and to provide audio and video feedback to the child.

Ten hemiplegic children were enrolled (7 males and 3 females, aged between 3 and 16 years), with cerebral palsy (hemiplegic form) and an Intelligence Quotient (IQ) higher than 80. The exclusion criteria were: severe visual or hearing impairment, or pharmacological or surgical treatments administered in the 6 months before the beginning of the study. Caregivers gave their informed consent to the study, and each child was scheduled to perform three sessions of exercises with smart objects. The duration of each treatment varied from child to child and from session to session, with the first session usually longer (about 40 min) than the second and third (20–30 min).

All the rehabilitation activities took place in the "Low Tech" room of the CARE Lab (Olivieri et al. 2018). The room setting consisted of the following items (see Fig. 3):

  1. A computer with a wide-screen monitor (40″), equipped with loudspeakers to ensure appropriate perception of the biofeedback by the children throughout the room. The computer runs the PRESO software for data acquisition and biofeedback production. The computer and the wide screen were placed in a corner of the room, leaving appropriate space for all the exercises, with the therapist always in position E.

  2. A toy basketball hoop.

  3. Two cardboard boxes to hold the smart ball.

  4. Two cardboard boxes holding the start/stop big buttons.

Special markers were placed on the room's floor (items a–e in Fig. 3) to indicate the starting positions for the various exercises, and the two big colored buttons were placed on top of cardboard boxes of appropriate height.

Fig. 3

On the left a schematic view of the room. On the right, a picture of the actual setup

2.1 Signal processing and feedback

Among the several systems for assessing body motion, some of which are described in (Perez et al. 2010), we chose inertial units for their peculiar features. Optoelectronic systems (like those produced by Vicon or BTS Engineering), which represent the gold standard for motion analysis, require a great amount of set-up time (often several tens of minutes), and the markers need to be attached to the skin to produce reliable results, requiring the subject's body to be largely exposed. Moreover, most of these systems do not allow real-time use of the collected measures, since they often require off-line filtering and reconstruction of missing information (e.g. due to occlusions), usually done by experienced technicians. Other commercially available solutions allow real-time, highly accurate tracking of the position and orientation of the sensing device, such as electromagnetic systems (e.g. the Polhemus G4). However, they typically have the drawbacks of a limited number of sensing units and the need for wired connections between the sensing elements and a wearable processing and transmission unit, or rather bulky wireless alternatives.

Inertial systems, instead, can provide information in real time, are usually very compact, and allow prolonged wireless acquisition. The measures we collected from the IMU sensors were the 3D accelerations and 3D angular velocities, within the ranges ± 2 g (accelerometer) and ± 2000 dps (gyroscope), respectively. The drawback of this choice is the lack of direct measures of the absolute or relative position and inclination of the units, since IMUs provide only derived quantities such as orthogonal accelerations and angular velocities. Thus, instead of introducing complex and potentially unreliable algorithms, we opted for a simpler and more straightforward approach, focused on efficiency and real-time operation, calculating for each sampling time (see the sketch after the list below):

  (a) the moduli of the 3D acceleration and angular velocity;

  (b) the standard deviation of these moduli, useful to detect when the ball, the hands and the trunk were still or moving;

  (c) the estimate of the angle between one accelerometer axis (X for the hands, Y for the trunk) and the vertical axis, according to the projection of the gravitational acceleration on the three axes.
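As an illustration, the following minimal Python sketch computes these three quantities, assuming raw IMU samples collected into NumPy arrays and an illustrative sampling rate (the names and values are ours, not part of the PRESO software):

```python
import numpy as np

FS = 140  # assumed sampling rate (Hz); illustrative, not stated in the paper

def modulus(xyz):
    """(a) Modulus of a 3D signal over time; xyz is an (N, 3) array."""
    return np.sqrt((xyz ** 2).sum(axis=1))

def rolling_std(signal, window):
    """(b) Standard deviation over a sliding window, to detect whether the
    ball, hands and trunk are still or moving."""
    out = np.full(len(signal), np.nan)
    for i in range(window, len(signal) + 1):
        out[i - 1] = signal[i - window:i].std()
    return out

def tilt_angle(acc_xyz, axis=0):
    """(c) Angle (degrees) between one accelerometer axis (X for the hands,
    Y for the trunk) and the vertical, from the projection of gravity on the
    three axes; meaningful only when the sensor is quasi-static."""
    g = modulus(acc_xyz)
    return np.degrees(np.arccos(np.clip(acc_xyz[:, axis] / g, -1.0, 1.0)))
```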

The PRESO software was designed and developed on purpose using Microsoft .NET C#, to perform data acquisition from the IMUs and to process the data and produce suitable audio/video biofeedback. PRESO mainly consists of two screens: one to select the exercise and verify in real time that the data acquired from the sensors are valid, and one containing the visual feedback related to each exercise. Besides the visual cues, PRESO produces acoustic feedback and can also play background entertainment music (selectable by the operator) during the exercises, to enhance the gaming atmosphere. In all exercises, the first audio cue is a whistle to start the activity; then, a high-frequency beep is produced whenever one of the sensors on the hands and/or on the trunk is outside the maximum angle with the vertical axis. This angle is customizable according to each child's characteristics.

In the time-based exercises, the child is asked to wait for the so-called "trigger condition" before starting the action to be performed. This condition consists of remaining, for a given percentage of a time window (usually 80% of 1.5 s, but the figure is fully customizable according to each child's capabilities), in the following body position (referred to as the "correct position"; a sketch of this check follows the list):

  • Both hands and trunk upright (the angle is calculated from the 3D accelerometer) and wrists in neutral position;

  • Upper limbs and trunk not moving (standard deviation of the accelerometer and gyroscope moduli below a given threshold).
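A minimal sketch of this trigger check, reusing the helpers above (threshold values and names are illustrative assumptions, not the actual PRESO parameters):

```python
def trigger_fired(tilt_deg, acc_std, gyro_std, max_tilt=20.0,
                  still_thresh=0.05, fraction=0.8,
                  window=int(1.5 * FS)):
    """True if, over the last `window` samples, the subject held the
    'correct position' for at least `fraction` of the time: hands/trunk
    within the tilt limit and moduli's std below a stillness threshold."""
    ok = ((np.abs(tilt_deg[-window:]) < max_tilt)
          & (acc_std[-window:] < still_thresh)
          & (gyro_std[-window:] < still_thresh))
    return ok.mean() >= fraction
```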

Each exercise begins after the therapist presses the "start" button, and may be interrupted at any time through the "stop" button, in case of any problem.

3 Results

All children enjoyed participating in the study, with great interest in the activities performed. As expected, the interaction with a real object (the sponge ball) and the computer-based feedback enhanced the children’s engagement: the key factor seems to be the transformation of usual rehabilitation activities into new interactive ones (i.e. by “gamifying” them). No child experienced any adverse effects while participating in the study and all children completed the planned sessions.

The quantitative information captured during the activities, based on the sensors' readings, offers an innovative and operator-independent view that could be helpful in rehabilitation sessions performed in hospital, and may become a key factor in those performed at home, without the presence of the therapist.

The therapist had the opportunity to focus on the actual movements of the children, without worrying about the measures to be collected (e.g. angles, reaction times, etc.).

Regarding the quantitative insight offered by the sensors' readings, we may consider it in two different ways: on the one hand, the raw signals acquired (e.g. accelerations, angular velocities) may be analyzed for a deeper understanding of the movements performed; on the other hand, comprehensive derived indexes might be used to classify the kind of movements for a higher-level assessment.

Although a careful reconstruction of the whole trajectory (i.e. position over time) of freely moving objects cannot be computed from the IMU information alone [except for a very limited amount of time (Tsai et al. 2014)], such measures are sufficient to segment the activities into different phases. Even in this simplified view, we may observe the marked difference between the movements performed by the healthy and plegic arms (see Fig. 4 regarding the acceleration). It is also easy to identify the segments of the exercises where the hands and the ball may be considered as moving "together" and the phases where the arms are moving separately.

Fig. 4

Tracings of exercise 2 for a left hemiplegic child

The graphs shown in Fig. 4 represent the tracings acquired during the execution of a round of exercise 2 (transfer the ball still from one basket to the other) for a left hemiplegic child. A green line represents the ball, a red line the left hand and a blue line the right hand. Time (X axis) is expressed in seconds, while the Y axis represents the acceleration modulus in g (Fig. 4, top) and the angular velocity modulus in degrees per second (dps) (Fig. 4, bottom).

Looking at the charts, it is possible to divide the exercise into four phases.

  A. Reaction time: dashed line 1 indicates the time when the therapist clicks the start button on the monitor; the tracings then remain steady up to point 2, which marks the beginning of the movement.

  B. Beginning of the movement: the child moves from the starting position towards the cardboard basket containing the ball and grabs it (point 3). In this phase there are accelerations and rotations of the hands, while the ball is lying still (i.e. no signal).

  C. Ball carried from one basket to the other: after the child has taken the ball, its IMU begins to report variations similar to those of the hands, since they are moving almost together.

  D. Return to the start position: the child puts the ball in the basket (point 4), easily detectable since, from that moment, the ball shows negligible acceleration and angular velocity. At point 5 the child gets back to the start position and pushes both buttons.

We may also note that, during the whole movement, the plegic left hand's moduli appear smaller than those of the healthy right hand.
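The segmentation of the tracings was performed manually (see the limitations in Sect. 4); still, as an illustration, a boundary such as point 2 could be detected automatically with a simple onset rule built on the rolling standard deviation sketched earlier (window and threshold are illustrative assumptions):

```python
def movement_onset(acc_mod, window=int(0.25 * FS), thresh=0.05):
    """Index of the first sample where the rolling std of the acceleration
    modulus exceeds the stillness threshold (point 2 in Fig. 4), or None.
    Reaction time (phase A) could then be estimated as
    (movement_onset(acc_mod) - start_click_index) / FS seconds."""
    sd = rolling_std(acc_mod, window)
    moving = np.flatnonzero(sd > thresh)
    return int(moving[0]) if moving.size else None
```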

Besides a semi-qualitative analysis of the tracings, we took into consideration very simple and robust indexes, derived from the 3D accelerometer and 3D gyroscope measures. In this case we focused on the measures collected during exercise 3 (launch the ball to the therapist), and calculated, for each hand and each repetition, two indexes (a computational sketch follows the list):

  • the Time Integrated Rotation Index around the X axis (Fig. 1a) \(TIRI_{x} = \theta_{x} = \sum \left| {\omega_{x} } \right| \cdot \Delta {\text{t}}\) as an index of the overall rotation of the hand.

  • the Time Integrated Activity Index \(TIAI = \left| v \right| = \sum \sqrt {a_{x}^{2} + a_{y}^{2} + a_{z}^{2} } \cdot \Delta t\) [derived from (Hurd et al. 2013)], as an index of the overall contribution of the acceleration modulus to the hand movement.
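Under the same assumptions as the earlier sketches (NumPy arrays of raw samples, assumed sampling rate FS), these two indexes, and the difference used in Fig. 5b, could be computed per repetition as:

```python
DT = 1.0 / FS  # sampling interval, from the assumed rate above

def tiri_x(gyro_xyz):
    """Time Integrated Rotation Index around X: sum of |omega_x| * dt."""
    return np.sum(np.abs(gyro_xyz[:, 0])) * DT

def tiai(acc_xyz):
    """Time Integrated Activity Index: sum of the acceleration modulus * dt
    (the gravity component is left in, as in the raw measures)."""
    return np.sum(modulus(acc_xyz)) * DT

def delta_theta_x(left_gyro, right_gyro):
    """Delta_theta_x = TIRI_x(left) - TIRI_x(right): negative values suggest
    a left hemiplegic pattern, positive a right one (Fig. 5b)."""
    return tiri_x(left_gyro) - tiri_x(right_gyro)
```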

In Fig. 5a, we observe that plotting this parameter calculated for the left hand (x-axis) against the right hand (y-axis) divides the children according to the plegic side, as we might expect. Moreover, we may also notice (Fig. 5b) that the mean difference between the two overall rotation indexes, \(\Delta_{{\theta_{x} }} = \theta_{{x \left( {left\, hand} \right)}} - \theta_{{x \left( {right\, hand} \right)}}\), clearly distinguishes left (\(\Delta_{{\theta_{x} }} < 0\)) and right (\(\Delta_{{\theta_{x} }} > 0\)) hemiplegic children.

Fig. 5

a Scatter plot of the indexes \(\theta_{x}\) for the right hand (y axis) against \(\theta_{x}\) for the left hand (x axis) for each repetition along the three sessions. b Mean values of the difference \(\Delta_{{\theta_{x} }} = \theta_{{x \left( {left\,\, hand} \right)}} - \theta_{{x \left( {right\,\, hand} \right)}}\) over all the repetitions performed

This division into two populations also appears clearly if we consider the median values of the above-mentioned indexes calculated for each rehabilitation session (\(\Delta_{{TIRI_{x} }} = TIRI_{{x \left( {left \,hand} \right)}} - TIRI_{{x \left( {right\, hand} \right)}}\), \(\Delta_{TIAI} = TIAI_{left\, hand} - TIAI_{right\,\, hand}\)), as reported in Fig. 6: left hemiplegic children show a higher contribution from the right arm, and vice versa for right hemiplegic children.

Fig. 6

Median values of the index \(\Delta_{{TIRI_{x} }}\) calculated over all the repetitions, reported for each subject (S1,…, S8) for each session performed
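Such per-session summaries could be computed, under the same assumptions as the earlier sketches, as the median of the per-repetition left-right differences (a hypothetical helper, not part of PRESO):

```python
def session_median_delta(left_reps, right_reps, index_fn=tiri_x):
    """Median over a session's repetitions of index(left) - index(right),
    as plotted per subject and per session in Figs. 6 and 8.
    `left_reps`/`right_reps` are lists of per-repetition (N, 3) arrays."""
    deltas = [index_fn(l) - index_fn(r) for l, r in zip(left_reps, right_reps)]
    return float(np.median(deltas))
```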

Looking at Fig. 7, although histogram (b) substantially overlaps the corresponding rotation-index histogram (Fig. 5b), the two populations (left and right hemiplegic children) appear less separated in (a) with respect to the previous scatter plot in Fig. 5a, although a substantial difference still remains.

Fig. 7

a Scatter plot of the indexes \(TIRI\) for the right hand (y-axis) against \(TIRI\) for the left hand (x-axis) for each repetition along the three sessions. b Mean values of the difference \(\Delta_{TIRI} = TIRI_{{\left( {left\,\, hand} \right)}} - TIRI_{{\left( {right\,\, hand} \right)}}\) over all the repetitions performed

As we may observe in Fig. 8, there are again some unexpected measures (such as \(\Delta_{TIAI}\) for S7, second session), as well as different trends in the results from session to session.

Fig. 8

Median values of the index \(\Delta_{TIAI}\) calculated over all the repetitions, reported for each subject (S1,…, S8) for each session performed

As a final remark, there was no clear correlation between the clinical scale results reporting the severity of the children's deficits and their measured performances, very likely due to the limited sample size and the few sessions performed.

4 Discussion

Despite the small number of subjects and sessions involved in the protocol and the limited analysis of the data, the methodological approach reported in this paper presents interesting perspectives that deserve to be investigated further. The few tracing examples introduced above show several features, some of which are in line with expectations (the healthy limb should perform better than the plegic one), others somewhat controversial (e.g. those parts of the tracings where the plegic hand shows higher values than the healthy one). Furthermore, from the IMU measures of acceleration and angular velocity (segmented as reported in the previous section), we may derive several useful indexes, such as reaction times, estimates of lateralization, quantitative indications of the child's motor strategies, etc.

The measures acquired and the derived indexes may also enable novel forms of biofeedback-driven exercises that could increase the efficacy of the rehabilitation process, while offering continuous quantitative insight. These considerations become even more relevant considering that the usual approach to pediatric rehabilitation consists of few assessments (one at the beginning and, only sometimes, another at the end of the planned sessions), which may not properly track the competencies and frailties that children express in their daily life. Besides being fundamental for engaging biofeedback-based forms of rehabilitation, sensors that measure movements in an ecological manner could offer continuous insight, complementing validated clinical scales for a more thorough view of the rehabilitation process.

Other computer-based techniques, using Microsoft Kinect, Leap Motion or electromagnetic systems (like the Polhemus G4), may appear more suitable for the purpose. However, as described in the introduction, they have the strong limitation that children interact only with the virtual representation generated by the computer, and not with real objects, whose weight, shape and texture can be physically perceived. This aspect is especially relevant when considering the need to start the rehabilitation process as early as possible in young children, who may not yet have fully developed the cognitive skills needed for virtual interactions.

It is however important to highlight other limitations of the present study.

  • Since the protocol was aimed primarily at investigating the feasibility and usability of the approach, besides the small number of children and sessions, there is a lack of proper pre and post assessments with clinical scales. This is indeed the main limitation.

  • Not all the interactions were measured (e.g. we have no quantitative readings about the basketball hoop, or about the pressure exerted by the children on the buttons and/or under the cardboard basket that hosted the ball).

  • The segmentation of the tracings has been performed manually, while it would be advisable to have an automatic (or semi-automatic) assessment tool to analyze the tracings.

  • The straps used to hold the IMUs on the back of the hands and on the chest were a bit too bulky, proving difficult to place correctly and to keep firmly in place (due to sweat) throughout the exercise, especially for younger (smaller) children. As a result, the relative position of the IMU with respect to the hand sometimes changed during the exercise.

  • Finally, the setup was a bit too complicated for younger kids, resulting in the need either to redesign the overall exercise sequence and look-and-feel for the youngest children (less than 4 years old), or to exclude them from the population.

More generally, the present study is also limited from the algorithmic point of view, not yet taking into consideration the growing and promising area of Machine Learning and AI techniques to classify and assess movements and performance from sensor readings. These perspectives certainly deserve further study, having already produced interesting results in the field (Ahmadi et al. 2018; Hagenbuchner et al. 2015; Trost et al. 2016). The difficulty, however, is also related to the freedom of the performed movements, since the tasks of each exercise could be accomplished by different subjects in completely different ways, according to the nature and severity of each child's pathology.

Some intrinsic limitations of the approach are due to the specific nature of the IMUs, which do not measure position and orientation but rather their second derivative (accelerometers) and first derivative (gyroscopes); the magnetometer readings were not taken into consideration. The lack of direct measures of the actual trajectories of hands and objects represents a major limitation for a full understanding and assessment of movements, especially when the connection between them tends to be loose (e.g. while catching or releasing objects like the ball).

Potential improvements would consist in using Kalman filtering and trajectory reconstruction techniques, which could help bridge the gap between the measures acquired and the absolute or relative position/orientation of objects and limbs. Possible future enhancements of the system could also come from further miniaturization of wireless electromagnetic devices (like the Polhemus Liberty™ Latus™ or Patriot™), which currently cannot be placed into small objects, nor onto the hands of small children, due to their size and weight. Future hybrid optical/inertial solutions would also contribute greatly, especially considering devices like the upcoming Microsoft Kinect for Azure or the Leap Motion systems, leading to completely new scenarios in pediatric rehabilitation settings.
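As a flavor of such sensor-fusion techniques, the sketch below shows a complementary filter, a lightweight relative of the Kalman filter, for a single tilt angle. It is a deliberately simplified 1-D illustration, reusing the helpers and constants from the earlier sketches and assuming rotation about a single gyroscope axis, not a tested implementation:

```python
def complementary_tilt(acc_xyz, gyro_dps, alpha=0.98):
    """Fuse gyroscope integration (smooth but drifting) with the
    accelerometer-based tilt (noisy but drift-free) to track the angle
    between the sensor's X axis and the vertical over time."""
    acc_tilt = tilt_angle(acc_xyz, axis=0)
    theta = np.empty(len(acc_tilt))
    theta[0] = acc_tilt[0]
    for i in range(1, len(theta)):
        predicted = theta[i - 1] + gyro_dps[i] * DT  # gyro dead-reckoning
        theta[i] = alpha * predicted + (1 - alpha) * acc_tilt[i]
    return theta
```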

The concept of the Smart Object will also evolve in the near future, as the literature is rapidly growing on the use of compact multimedia devices and small robots that are capable not only of sensing and transmitting, but also of interacting with children (with sounds and lights, or even gestures, as in humanoids).

We are conscious that these may be the first steps into new territory. Nevertheless, the tracings and the comprehensive index charts reported in Sect. 3 clearly show the potential of novel scenarios in which suitably "smartified" common objects (toys, pencils, cutlery, glasses, etc.) may be used to record the interactions with the subject's limbs, enhancing the recovery or improvement of impaired functions in hemiplegic children.

5 Conclusions and future work

From the experience gathered so far in this feasibility study, "smartified" objects seem to have great potential in rehabilitation, not only in terms of appeal and engagement, but also from a quantitative and personalized perspective. They could provide therapists and clinicians with useful data to tailor the therapeutic path and therefore ensure the best exercise for each child in every single rehabilitation session. Moreover, the intrinsic ecological and interactive nature of "smart objects" could decisively boost the "last mile" of the rehabilitation process, by involving the very objects used in common daily activities. All this becomes even more important if we consider the increasing demand for continuity of care and good-quality home-based rehabilitation services. These considerations may even suggest a novel area of AAL research, specifically focused on Ambient Assisted Rehabilitation, as proposed in the title of this work.

Of course, further studies are needed to thoroughly understand the relationship between the measures collected and the actual movements performed in activities that may not be natively well defined, unlike those used in human motion analysis of specific gestures (e.g. taking three steps in a gait analysis assessment). In home- or ambulatory-based settings, moreover, specific studies are needed to investigate not only this relationship, but also how to process the information and provide real-time and off-line feedback to the subject, given the absence of the therapist during the activity.

Improvements are also needed in defining proper indexes, selecting the most appropriate exercises (e.g. throwing the ball into the basketball hoop led to somewhat fuzzy results, due to the lack of information about the interactions of the ball with the environment), and possibly designing new ones. For this reason, we are currently defining a larger study, based on the achievements of the work presented in this paper. This study, involving a larger number of children over ten rehabilitation sessions, will also include other activities, an overall revision of the PRESO software and, of course, pre and post assessments with clinically validated scales [like the Melbourne Assessment 2 (Randall et al. 2014), AHA (Krumlinde-Sundholm et al. 2007), etc.]. Such a study would explore, with a larger amount of acquired data and a comparison with clinical scales, the efficacy of this novel rehabilitation approach.

In the future, new "smartified" objects could also be designed and used, replacing the straps currently needed to place the sensors on the hands and on the chest by hiding the sensors inside nicer, better-looking objects. Moreover, the smartification of useful objects (like glasses, cutlery, toothbrushes, etc.) may also be taken into consideration to address children's daily life. We also expect to involve a wider range of different smart (or smartified) objects, given their increasing presence in children's environments (e.g. mobile robots, smart toys). In the longer term, and in a wider AAL perspective, interaction with "environmental" smart objects (like smart fridges, ovens, tubs, etc.) could also be considered. Particular attention, in this perspective, should be paid to the relationship among Ambient Intelligence, Assistive Technologies and Computer Assisted/Aided Rehabilitation. Further studies and analyses should hence be carried out in this direction, as there are a number of "opportunities, challenges and open problems" (Darwish et al. 2019). In a connected world, the capability of smart environments (i.e. native smart objects, smartified common objects, hybrid systems, cloud-based solutions, etc.) to transform quantitative measures into real, valuable knowledge about human activities may prove fundamental in assessing and sustaining the recovery, improvement or development of motor and/or cognitive competences.

As a final consideration, we believe that novel forms of rehabilitation (Pediatric Rehabilitation 2.0), based on an Ecological, Personalized, Interactive and Quantitative approach (Olivieri et al. 2018), will very likely become routine. In this perspective, the use of smart objects and smart environments, in what could be defined as "Ambient Assisted Rehabilitation", would represent one of the key steps in this process.