1 Introduction

People closely interact with soft objects, such as cushions, plush toys, and pillows, in their everyday living spaces. By measuring the interactions between people and these soft objects, we can construct user interfaces that blend into the everyday environment [9], recognize users' behavior in their daily lives, and provide services according to the users' condition [2, 11, 14].

Many researchers have attempted to construct soft user interfaces. For example, Tominaga et al. and Vanderloock et al. developed soft user interfaces consisting of conductive fibers and soft materials such as wool [12, 13], and Hiramatsu et al. developed a soft ball interface that senses the deformation of the ball with an atmospheric pressure sensor placed inside it [1]. However, these studies focused on creating original soft interfaces rather than on techniques for using existing soft objects as interfaces.

There are several advantages to using an existing soft object as an interface. First, using soft objects that people are already used to having in their living space makes it possible to measure users' natural movements. Second, costs can be reduced because there is no need to purchase soft objects with built-in sensors. Third, an interface can be created from soft objects that match the users' preferences. Because existing approaches [9, 15] place sensors inside soft objects, it is difficult to insert the sensors without damaging the users' favorite soft objects.

We thus propose a system that turns a soft object into a touch interface in a non-intrusive manner (Fig. 1). Our system measures the surface deformation of a soft object with detachable photo sensor modules and estimates the touch position. To prove the concept, we conduct an experiment to evaluate the estimation accuracy of the touch positions using the developed module.

Fig. 1. Detecting a touch position using photo sensor modules.

2 Related Work

2.1 Soft Sensors and Soft User Interfaces

Previous studies have developed soft sensors that detect haptic interactions using various sensing devices, such as a phototransistor with multiple IR LEDs [3], conductive rubber [16], gel [17], and fabric [18]. Because those methods require creating a dedicated soft element for sensing, we instead propose a method that turns the surfaces of existing soft objects into sensors.

Kamiyama et al. proposed a camera-based method for detecting deformations of a soft material by tracking the positions of embedded markers [4]. The method allows accurate three-dimensional force vector detection using their GPGPU technique. Sato et al. proposed an alternative camera-based detection method based on polarization [8]. However, it is difficult to apply these methods to the surfaces of soft objects available in people's daily lives due to spatial restrictions and shielding issues.

To overcome these restrictions, Sugiura et al. proposed a method that enables people to interact with soft objects without changing their softness by embedding a photo sensor module inside a cotton-filled soft object [9]. However, for soft objects that cannot be opened, such as cushions, the object must be cut open to insert the sensor module, which damages it. To avoid this damage, our proposed method turns soft objects into interfaces without causing any damage by attaching the module to the outside of the object.

Yagi et al. developed a system that measures the state of a cushion by sewing an acceleration sensor, photo sensors, and touch sensors to the back of the cushion cover [14]. This system can estimate the user's behavior and intention from the cushion state and can thus be used to control the environment, such as lighting and sound. However, since the touch sensors are sewn near the surface of the cushion, the user might feel the hardness of the sensors when touching it. We therefore construct a system that allows touch sensing without sacrificing any of the cushion's softness.

Sugiura et al. developed a ring-type device that can convert existing plush toys into interactive robots [10]. The device, which has a built-in sensor and motor, can actuate plush toys while attached to their arms, legs, or tail. Because there is no need to cut open the plush toys, they are not damaged. However, since this device uses sensors to measure the joint angles of plush toys, it cannot be attached to objects that have no joints, making it difficult to apply to soft objects such as cushions and pillows.

2.2 Detecting Shape Deformation Using Photo Reflective Sensors

Photo reflective sensors measure the intensity of reflected emitted light, which changes with the distance to, and the reflective properties of, the target object. Considerable research has used such sensors to estimate the deformation of soft surfaces, including human skin. Nakamura et al. proposed a device that intuitively controls augmented reality information according to eyebrow movement; the system detects eyebrow movements with a single photo reflective sensor from the amount of reflected light [5]. Ogata et al. developed a band-type device that turns skin into a soft interface [6]; photo reflective sensors installed on the back of the device measure changes in the distance to the skin and estimate its deformation. In this paper, we estimate the deformation of a soft object by measuring the distance between its surface and the photo reflective sensors.

3 Detecting Touch Position Using Photo Sensors

Photo reflective sensors consist of an infrared LED (IR LED) and a phototransistor that senses the light emitted by the IR LED. When an object is placed near the sensor, the light from the IR LED is reflected by the object and detected by the phototransistor. Since the reflected intensity varies with the distance between the sensor and the object, the sensor can be used to measure that distance, making it an effective tool for proximity measurement. When the object is deformed, the distance between the sensor and its surface changes; thus, the deformation of the object can be estimated from the change in the sensor values.
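Although our system feeds the sensor values directly to a classifier rather than computing explicit distances, the underlying relationship can be illustrated with a simple calibration model. The following is a minimal sketch assuming an inverse-square falloff of reflected intensity with distance; the function `distance_from_reading` and its coefficients are hypothetical placeholders that would be fitted from readings at known distances.

```python
import math

# Hypothetical inverse-square calibration: the sensor value v falls off
# roughly as a / d^2 plus an ambient offset b, beyond the sensor's
# minimum measurable distance. a and b are placeholders, not measured
# coefficients.
def distance_from_reading(v, a=2000.0, b=50.0):
    v = max(v, b + 1e-6)            # clamp readings at or below the offset
    return math.sqrt(a / (v - b))   # distance in calibrated units
```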

We develop a belt-type sensor module that can be attached to the outside of soft objects. Figure 2 shows the principle of the proposed sensing method. Several photo reflective sensors are placed on the back of the module's belt. The shape deformation of the soft object is estimated by measuring changes in the distance between the soft object and the photo reflective sensors. When the surface of the soft object is touched, the area around the touched position also deforms owing to the material's elasticity, so the touched position can be estimated even if it is located away from the sensor module.

Fig. 2. Principle of touch sensing using photo reflective sensor modules attached to the soft object.

4 Implementation

4.1 Overview of Our System

Figure 3 shows an overview of our touch sensing system. The photo sensor modules obtain sensor values from the deformable object, and the touch positions are labeled to form a training dataset. After learning from the training dataset, the classifier can estimate the current touch position from the incoming sensor values. We use a support vector machine (SVM) as the classifier; the sensor values are transmitted to a PC, where the classifier is trained and run.
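The pipeline can be summarized with the following minimal sketch. The actual implementation uses the PSVM library in Processing (Sect. 4.3); scikit-learn's SVC is used here only as an equivalent stand-in, and the training data are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training data: each row is one 6-sensor reading;
# labels 0-9 cover non-touch plus the nine touch areas (A-I).
rng = np.random.default_rng(0)
X_train = rng.random((100, 6))
y_train = np.repeat(np.arange(10), 10)

clf = SVC()                      # stand-in for the PSVM classifier on the PC
clf.fit(X_train, y_train)        # build the classifier from labeled data

frame = rng.random(6)            # one incoming sensor frame
label = clf.predict(frame.reshape(1, -1))[0]   # estimated touch position
```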

Fig. 3. Overview of the sensing system.

4.2 Hardware

Figure 4 shows an overview of our prototype photo reflective sensor module, which consists of multiple photo reflective sensors (KODENSHI SG-105) and a microcontroller (Arduino Micro). A 300 Ω resistor for each IR LED and a 10 kΩ resistor for each phototransistor were attached to the sensor module. Three photo reflective sensors were installed on the belt, 7 cm apart. Since photo reflective sensors are insensitive at distances shorter than their minimum measurement distance, a small donut-shaped plastic spacer (0.3 cm thick) is attached to each sensor to keep the deformable surface at a measurable distance. As the ATmega328 microcontroller has a 10-bit A/D converter, the output of each photo reflective sensor ranges from 0 to 1,023. To detect the position of a touch on the cushion, two belt-type sensor modules were constructed and attached.
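On the PC side, the six values can be read as in the sketch below. This assumes the firmware streams the six 10-bit readings as one comma-separated line per frame over USB serial; the paper does not specify the wire format, and the port name and baud rate are likewise assumptions.

```python
import serial  # pyserial

# Port name and baud rate are assumptions; adjust for your setup.
port = serial.Serial("/dev/ttyACM0", 115200, timeout=1.0)

def read_frame():
    """Read one frame of six 10-bit sensor values (S1..S6)."""
    line = port.readline().decode("ascii", errors="ignore").strip()
    values = [int(v) for v in line.split(",")]
    if len(values) != 6 or not all(0 <= v <= 1023 for v in values):
        raise ValueError("malformed frame: " + line)
    return values
```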

Fig. 4. Prototyped belt-type module.

Figure 5 shows how the modules are attached to the cushion. The cushion is filled with polyester cotton and covered with cotton fabric (30 cm × 30 cm); its actual size after filling was 24 cm (length) × 24 cm (width) × 10 cm (height). The sensors on the belt attached to the left side of the cushion are labeled S1, S2, and S3 in order from the side closest to the microcontroller, and the sensors on the belt attached to the right side are labeled S4, S5, and S6 in the same order. The six sensor values are used as the sensor data.

Fig. 5. A cushion equipped with two photo reflective sensor modules.

4.3 Method of Estimating the Touch Position

4.3.1 Filtering Photo Reflective Sensor Value

Since photo reflective sensors are affected by ambient light, a filter is necessary to reduce noise. Before estimating the touch position with the machine learning technique, we applied an IIR low-pass filter (cutoff frequency: 1.7 Hz) to the sensor values.
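A first-order IIR low-pass filter with this cutoff can be realized as a simple exponential smoother, as in the sketch below. The 1.7 Hz cutoff comes from the paper; the 100 Hz sampling rate and the first-order design are our assumptions, as the paper specifies neither.

```python
import math

def make_lowpass(cutoff_hz=1.7, sample_rate_hz=100.0):
    """First-order IIR low-pass filter (exponential smoothing)."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate_hz
    alpha = dt / (rc + dt)          # smoothing coefficient
    y = None

    def step(x):
        nonlocal y
        y = x if y is None else y + alpha * (x - y)
        return y

    return step

filters = [make_lowpass() for _ in range(6)]  # one filter per sensor S1..S6
```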

4.3.2 Estimating the Touch Position Using SVM

The area between the two modules was divided into nine areas, and markers were attached as shown in Fig. 6. The markers are labeled A through I in order from the top left. Figure 7 shows the sensor data when A, E, and I are each touched directly from above with a force gauge (A&D Company AD-4932A-50N, resolution: 0.01N) fitted with a 1.5-cm-diameter cylindrical tip similar in shape to a human finger. When a force of 7N is applied at different positions, the sensor values change depending on the position touched, as shown in Fig. 7. We normalized each photo reflective sensor's output to the range 0 to 1 based on data averaged over 10 measurements recorded by each sensor when force was applied at each position.
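Our reading of this normalization step is sketched below. The paper does not give the exact formula, so normalizing by the per-sensor minimum and maximum of the averaged calibration responses is an assumption.

```python
import numpy as np

def fit_normalizer(cal):
    """cal: array of shape (9, 10, 6) - nine positions x ten repeated
    presses x six sensors, as in the calibration described above."""
    mean = cal.mean(axis=1)                      # (9, 6) averaged responses
    lo, hi = mean.min(axis=0), mean.max(axis=0)  # per-sensor range

    def normalize(x):
        return np.clip((x - lo) / np.maximum(hi - lo, 1e-9), 0.0, 1.0)

    return normalize

cal = np.random.default_rng(0).random((9, 10, 6))  # placeholder calibration data
normalize = fit_normalizer(cal)
```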

Fig. 6. Arrangement of markers on a cushion.

Fig. 7. Sensor data when touching each position.

For the SVM implementation, we used the Support Vector Machine for Processing (PSVM) library [7]. We first prepared a training dataset for estimating the touch position: the user touches each of the nine areas on the soft object, and the system accumulates the learning data by recording the sensor values for each area. PSVM outputs the probability that the input belongs to each position, and each position is weighted by this probability. This enables the system not only to recognize the basic nine areas but also to calculate the position on the 2D plane to which the force is applied.
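The probability-weighted interpolation can be sketched as follows. PSVM runs in Processing; here scikit-learn's probability-enabled SVC stands in for it, and the marker coordinates and training data are illustrative placeholders rather than measured values.

```python
import numpy as np
from sklearn.svm import SVC

# Marker grid A..I (labels 0..8), left to right, top to bottom; the
# coordinates are illustrative positions on the cushion surface in cm.
MARKERS = np.array([(x, y) for y in (4.0, 12.0, 20.0) for x in (4.0, 12.0, 20.0)])

rng = np.random.default_rng(0)
X_train = rng.random((90, 6))            # placeholder sensor vectors
y_train = np.repeat(np.arange(9), 10)    # ten samples per marker

clf = SVC(probability=True).fit(X_train, y_train)

def estimate_position(sample):
    """Continuous 2D estimate: centroid of markers weighted by class probability."""
    p = clf.predict_proba(np.asarray(sample).reshape(1, -1))[0]
    return p @ MARKERS                   # a point on the plane between markers

print(estimate_position(rng.random(6)))  # e.g. array([x, y])
```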

5 System Evaluation

To evaluate this system, we conducted an experiment to validate the classification accuracy under ten touching conditions (non-touch plus touches at the nine marked locations on the cushion). To investigate the classification accuracy under various touching conditions, we repeated the experiment with four touch forces (1N, 3N, 5N, and 7N).

5.1 Conditions

To control the touch force applied to the cushion, a force gauge (A&D Company AD-4932A-50N, resolution: 0.01N) with a 1.5-cm-diameter cylindrical part attached at the tip was used. Sensor values were measured when touching with each force of 1N, 3N, 5N, and 7N. The cushion is the same as that described in Sect. 4.3, and the markers were arranged as shown in Fig. 6. The captured dataset was divided into two halves: the first half was used as training data and the second half as test data.
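The evaluation procedure corresponds to the following sketch, with placeholder arrays standing in for the recorded dataset; scikit-learn is again used in place of PSVM.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

# Placeholder for the captured dataset, kept in recording order.
rng = np.random.default_rng(0)
X = rng.random((200, 6))
y = rng.integers(0, 10, size=200)     # non-touch + nine positions

half = len(X) // 2                    # first half trains, second half tests
clf = SVC().fit(X[:half], y[:half])
pred = clf.predict(X[half:])
print(accuracy_score(y[half:], pred))
print(confusion_matrix(y[half:], pred))
```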

5.2 Result

5.2.1 Accuracy When Training and Testing with the Same Force

Figure 8 shows the accuracy when using the same force data during training and testing. The average accuracy is 93.7%.

Fig. 8. Classification accuracy when training and testing with the same force.

As shown in Fig. 8, higher force conditions generally showed higher accuracy, except for the 7N condition. The lowest accuracy (83.9%) occurred in the 1N condition. Table 1 shows the confusion matrix of the results in the 1N condition.

Table 1. Confusion matrix of the results when touching with 1N.

Table 1 shows that confusion occurs between non-touch, B, and H. The locations of B and H are farthest from the sensor modules under the experimental conditions. In the 1N condition, both the intensity and the spatial extent of the deformation were small; therefore, when these points were touched with a weak force, the deformation might not have been large enough to be classified correctly.

We believe the accuracy increased with larger touch forces because the deformation of the cushion propagated over a wider area, producing relatively large changes in the sensor values. To improve accuracy under weak-force conditions, the distance between the modules must be shortened.

5.2.2 Accuracy When Training with One Force and Testing with Different Forces

Table 2 shows the accuracy when training on each force condition and testing with the other forces. Accuracies of 80% or more are shown in bold.

Table 2. Accuracy when training on each force and testing with each force (%).

The 1N test dataset showed low accuracy when classified with the training datasets of the other conditions. Table 3 presents a confusion matrix for the 1N test dataset, averaged over the 3N, 5N, and 7N training conditions.

Table 3. Confusion matrix (Test data: 1N, Training data: 3N, 5N, 7N).

According to Table 3, confusion occurs between non-touch and B, and most of the results are concentrated in these two classes. In other words, when the training data include all forces other than 1N, a 1N touch is likely to be classified as non-touch. For the system to detect a force as weak as 1N, training data recorded at the same force must be prepared.

5.2.3 Accuracy When Training on All Force Data

To check the possibility of a universal classifier across the force conditions, we trained a single classifier on all the training datasets (1N, 3N, 5N, and 7N). Figure 9 shows the accuracies on each test dataset; the average accuracy is 79.0%. As shown in Fig. 9, the accuracy when touching with a 1N force is the lowest; as described above, 1N inputs tend to be identified as non-touch.

Fig. 9. Classification accuracy when training on all force data.

The accuracy when touching with a force of 3N is lower than when training on 3N data alone. However, the accuracy when touching with forces of 5N and 7N is 98.0%, almost the same as when training only on 5N or 7N data.

When we excluded the 1N test data, the classifier achieved a 92.0% classification accuracy. Thus, forces of 3N and above perform well across the different force conditions.

6 Application

We developed a prototype application of our proposed system (Fig. 10), which enables the user to interact dynamically with content shown on a screen by squeezing a plush toy. When the user touches the plush toy, soap bubbles are displayed on the screen, and the content changes depending on the strength of the applied force. The user can use their favorite plush toy as the interface. The system has the potential to use dynamic and emotional gestures such as hitting, stroking, and hugging as input methods, and could thus be applied to entertainment, therapy, and rehabilitation.

Fig. 10. Example application.

7 Limitations and Future Work

This study has several limitations, which inform our plans for future work. We tested the recognition rate on only one deformable object; future work could thus examine the recognition rate with different materials, shapes, sizes, and softness levels.

While this study detected touch positions at nine points on a cushion, in future studies we aim to recognize gestures on soft objects using continuous recognition results.

The photo reflective sensors were found to be affected by ambient light, such as light from an electric lamp or sunlight. We thus plan to implement a filtering system using light modulation to separate the IR light emitted by the LEDs in the photo reflective sensors from ambient light.
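One common form of such modulation is synchronous (on/off differential) sampling, sketched below under our own assumptions; the paper does not describe a concrete design, and read_adc and set_led are hypothetical callbacks for the sensor hardware.

```python
def read_modulated(read_adc, set_led):
    """Hypothetical ambient-light rejection by LED modulation.

    Sampling with the IR LED on and off and taking the difference
    cancels the slowly varying ambient component.
    """
    set_led(True)
    on = read_adc()    # reflected IR + ambient
    set_led(False)
    off = read_adc()   # ambient only
    return on - off    # reflected IR component
```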

Our proposed sensor module is wrapped around a soft object such as a cushion; however, the position of the module gradually shifts during use. We therefore plan to add clips or safety pins to fix the sensor module in place.

Furthermore, deformable objects show hysteresis over long-term use. Future work should thus test the repeatability over a long time frame.

8 Conclusion

In this paper, we proposed a sensing system that turns soft objects into touch interfaces using belt-type photo reflective sensor modules. In the proposed system, the change in reflected intensity due to the deformation of the object's surface is captured by the photo reflective sensor modules placed on the deformable object, and the touch position is estimated using an SVM machine learning technique.

To evaluate the estimation accuracy of the touch position on the deformable object, we measured the recognition rates when touching nine points on the cushion with various touch forces (1N-7N). When the training and test datasets used the same touch force, the estimated touch position achieved an accuracy of 93.7%. Furthermore, when the dataset of the weakest force (1N) was excluded, the average recognition rate was 92.0%, even when the test dataset mixed force conditions. The proposed method thus shows good performance under certain force conditions (3N-7N).