
Automated Drawing Recognition and Reproduction with a Multisensory Robotic Manipulation System

  • Anna Wujek
  • Tomasz Winiarski
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 440)

Abstract

The article presents a multisensory robotic system that reproduces contour drawings. The system first detects a sheet of paper with a reference drawing and determines the contours of the drawing, and then draws the contour image on a blank sheet of paper. The reproduction preserves the features of the original drawing: shapes, location and scale. The system was designed using the embodied agent theory. This article presents the two main parts of the designed system: the vision module (virtual receptor) and the control subsystem. The system is verified on a modified industrial manipulator acting as a service robot, with an eye-in-hand camera and a force/torque sensor mounted in the wrist.

Keywords

Service robot · Controller design · Sensor fusion

1 Introduction

Service robots are useful in a large variety of applications: technical, medical, domestic, even entertainment. In contrast to industrial robots, they are designed to work in partially unstructured and dynamically changing environments and to interact with people [1]. Therefore, to successfully control service robots, it is crucial to provide data from various sensors [2]. People use mostly sight and touch to move and to manipulate different types of objects, hence service robots are usually equipped with cameras and force/torque sensors. Image analysis, contact detection and force/torque measurement are vital to move the robot and manipulate objects effectively and safely.

In this paper, we present a multisensory robotic system that reproduces drawings. Based on image analysis of a reference drawing, its contour representation is generated and drawn. Agent-based and component-based approaches have been used to design this system.

The paper is organised as follows. In Sect. 2, various drawing applications of robots and related work are discussed. Then, in Sect. 3, the experimental setup of our system and its general structure are described. The virtual receptor processing images is presented in Sect. 4; the control subsystem and the virtual effector controlling the robotic arm are presented in Sect. 5. Section 6 presents the experimental verification of the proposed approach to reproducing drawings, performed on a real robot. It contains a description of the hardware and software used during the experiments, as well as the results of these experiments. Finally, Sect. 7 concludes the paper and presents future work.

2 State of the Art

In recent years, a number of drawing applications for robots have been developed. Most of them are mainly entertainment and artistic projects presenting computer vision and trajectory planning algorithms [3]. Many systems consist of a robotic arm for manipulation and a camera for image acquisition; other approaches reproduce an artist's hand motion registered with a force sensor [4]. Usually the robot is prepared to draw specific objects, such as a human face [5]. Some authors focused on robot behaviour imitating a human artist [6]. Many papers present robots painting with a flexible brush for the best artistic results, e.g. [7], which requires precise control of the pressure and slope of the brush. To perform more complex drawings, e.g. on a non-planar surface, robots are equipped with force sensors for contact detection [8]. Typically, authors of drawing applications for robots face two main problems: image analysis to determine the contours of an object seen with a camera, and trajectory generation, which depends on the surface shape, the drawing tool and the type of drawing.

Image processing and computer vision allow a robot to behave autonomously and to react to different situations without human interaction. Among the available sensors, the camera is the one that provides a lot of useful data, which can be used for many purposes: determining the position and orientation of a robot [9], locating people or objects around the robot, object or collision detection [10], quality control [11], etc. The key is to process images in an efficient and optimised way, so that the robot can react as fast as possible to avoid collisions and damage [12].

The concept of an embodied agent, that is an agent with a body interacting with the real environment [1], is a way to describe a robotic system decomposed into a set of subsystems. This decomposition of the robotic system distinguishes: real and virtual receptors, real and virtual effectors, and the control subsystem [13]. All of these components communicate with each other through communication buffers. In particular, in service robots the real receptors include different types of 2D and 3D camera devices, providing data to be stored and processed by virtual receptors [14]. The robot hardware with low-level joint controllers and encoders constitutes the robot's real effectors. Manipulator virtual effectors, general for various tasks but specific to the robot, execute the control loop in the task space of the robot, interpolate the trajectory, etc. On top of the whole system there is the control subsystem, specialised for specific tasks.

3 Experimental Setup and Embodied Agent Structure

The experimental setup used to reproduce drawings is presented in Fig. 1b. It consists of: a robotic arm with sensors (an eye-in-hand camera and a force/torque sensor mounted in the manipulator wrist), a tool able to hold a pen, and a horizontal platform with a paper sheet on which the robot can draw. The camera provides images of the drawing to be reproduced and images needed to locate the paper sheet. The force sensor is necessary to detect contact with the surface during drawing, because the vision module produces only a rough approximation of the distance from the robot end-effector to the paper sheet [15].
Fig. 1

a General system structure. b The experimental setup and the coordinate systems: 0—base frame of a robot, W—frame of the sixth joint of a robot (wrist), C—camera frame, D—camera optical frame, S—frame of a sheet of paper, and I—frame of camera image

The overall structure of the developed embodied agent is depicted in Fig. 1a. The camera device, providing images for further analysis, is the real receptor connected directly to the virtual receptor, which consists of two elements: the camera driver and the vision module processing images (Sect. 4). The robot hardware constitutes the real effector, in this case performing the drawing, connected to the virtual effector (robot hardware abstraction layer). The control subsystem, specialised for the task of drawing, communicates with the vision module and, based on that data, prepares commands for the virtual effector.

4 Virtual Receptor

Image processing in the drawing reproduction system consists of several phases (Fig. 2). Image acquisition is performed with the camera attached to the robotic arm, controlled by the camera driver. The obtained image is noisy and distorted, which is a typical problem with optical devices. Its quality is improved with a Gaussian blur algorithm to eliminate salt and pepper noise, and with a remap algorithm based on previously computed camera calibration parameters (focal lengths, principal point offsets and distortion coefficients) to eliminate distortion. The remap algorithm starts with the computation of the joint undistortion and rectification transformation \(map_x\) and \(map_y\), and then each pixel is remapped according to (1)
$$\begin{aligned} \forall_{p' \in I} \;\; p'(i,j) = p(map_x(i,j),map_y(i,j)) \end{aligned}$$
(1)
where \(p'(i,j)\) is the pixel in row i and column j of the output image I, and \(p(i,j)\) is a pixel of the input image.
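
The undistortion and denoising step can be sketched as follows (an illustrative Python/OpenCV snippet, not the implementation used in the system; the camera matrix K, the distortion coefficients and the file name are placeholder values standing in for the calibration results):

import cv2
import numpy as np

# Placeholder intrinsics: focal lengths, principal point and distortion
# coefficients would normally come from the offline camera calibration.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 480.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

raw = cv2.imread("reference_drawing.png")               # image from the camera driver
h, w = raw.shape[:2]

# Joint undistortion/rectification maps map_x, map_y of Eq. (1) ...
map_x, map_y = cv2.initUndistortRectifyMap(K, dist_coeffs, None, K,
                                            (w, h), cv2.CV_32FC1)
# ... followed by the per-pixel remapping p'(i,j) = p(map_x(i,j), map_y(i,j)).
undistorted = cv2.remap(raw, map_x, map_y, cv2.INTER_LINEAR)

# Gaussian blur suppresses acquisition noise before segmentation.
denoised = cv2.GaussianBlur(undistorted, (5, 5), 0)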
Fig. 2

Structure of the vision module, presenting the sequence of image processing operations and the data flow

The next step of image processing is segmentation, that is, classifying the pixels of the image into background, which will not be further processed, and objects to be processed. This information is used to determine the location of the sheet of paper and the features of the drawing. Segmentation consists of selection based on the colour of a pixel and morphology transformations, which smooth the image and improve its quality. Two types of morphology transformations are used: opening and closing. The result of this step is a binary image in which white pixels show the area of the paper sheet. Then, the position of the paper sheet in the image is determined. The paper sheet is rectangular, hence its position can be specified by the coordinates of its four corners. First, the edges in the binary image are found with the Canny edge detector [16]. The result is a binary image in which white pixels indicate edges. Then, the system detects long straight lines in the edges with the modified probabilistic Hough transform [17], one for each edge of the paper sheet. Each line is represented by a 4-element vector \((x_1,y_1,x_2,y_2)\), where \((x_1,y_1)\) and \((x_2,y_2)\) are its end points. The intersections of these lines indicate the coordinates of the corners. After a small correction based on the input image with a function using the Harris algorithm [18], which finds the most prominent corners in the image, the corner coordinates, represented by the 4-element vector of points \(\{(x_1,y_1),\ldots ,(x_4,y_4)\}\), are provided to the next phase of image processing.
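
The sheet localisation described above can be illustrated with the following Python/OpenCV sketch; the HSV limits, the morphology kernel and the Hough parameters are assumptions, and the line-intersection step of the paper is replaced here by a Harris-based search for the four most prominent corners inside the sheet mask:

import cv2
import numpy as np

def locate_sheet(bgr):
    # Colour-based segmentation: white paper has low saturation and high value.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))

    # Morphological opening and closing smooth the binary sheet mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Canny edges followed by the probabilistic Hough transform: each detected
    # segment is a 4-element vector (x1, y1, x2, y2).
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)

    # Harris-based detection of the most prominent corners within the mask.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=4, qualityLevel=0.01,
                                      minDistance=50, mask=mask,
                                      useHarrisDetector=True)
    return lines, None if corners is None else corners.reshape(-1, 2)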

The following step is the detection of contours. First, a rectangular mask with the shape of the paper sheet is applied to the input image, so that all pixels not belonging to the area of the paper sheet become black. Then, thresholding is performed to extract the lines of the drawing from the area of the paper sheet. Because the lines are thicker than one pixel, the morphological skeleton [19] (a topologically equivalent thinned version of the original contours) of the lines is computed.

After a morphology operation to smooth the lines, the image is ready for contour detection with the border-following algorithm [20]. This algorithm produces a list of contours determined in the image I:
$$\begin{aligned} {^{I}l}_{c'} = \{(p_{11},p_{12},\ldots ),(p_{21},p_{22},\ldots ),\ldots \} \end{aligned}$$
(2)
where each contour consists of a list of its points and \(p_{ij} = (x_{ij},y_{ij})\). The distance between any two adjacent points of a contour is 1 pixel,
$$\begin{aligned} dist({^{I}l}_{c'}[j][k],{^{I}l}_{c'}[j][k+1]) = 1\;pixel \end{aligned}$$
(3)
where j denotes the j-th contour and k its k-th point. Moreover, each contour is a closed curve, so even the contours of thin lines are doubled. To avoid drawing a contour twice and to store all pixels of the contours, a new list of corrected contours \({^{I}l}_{c}\) in the frame of image I is created with Algorithm 1, performed for each contour c, where \(dist_{min},dist_{max}\) are arbitrarily determined minimal and maximal distances between two subsequent points in one contour (if \(d < dist_{min}\), they are considered to be the same point; if \(d > dist_{max}\), they are assigned to two different contours).
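
The extraction and correction of contours can be sketched as follows (Python/OpenCV, assuming the OpenCV 4 API); this is an illustrative reconstruction of the idea behind Algorithm 1, not its verbatim form, with revisited pixels playing the role of the dist_min test and dist_max chosen arbitrarily:

import cv2
import numpy as np

def correct_contours(skeleton, dist_max=5.0):
    # Border following of [20]: returns closed borders, so a one-pixel-wide
    # line is traced twice (outward and return pass).
    borders, _ = cv2.findContours(skeleton, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    corrected = []
    for border in borders:
        pts = border.reshape(-1, 2)
        current, seen = [], set()
        for p in pts:
            key = (int(p[0]), int(p[1]))
            if key in seen:                           # return pass: pixel already stored
                continue
            if current and np.linalg.norm(p - current[-1]) > dist_max:
                corrected.append(np.array(current))   # jump too large: split the contour
                current = []
            current.append(p)
            seen.add(key)
        if current:
            corrected.append(np.array(current))
    return corrected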
The next step of image processing is to represent the contours in the frame of the paper sheet S (the origin is in one of the corners, the X and Y axes overlap the edges of the paper sheet, and the Z axis is perpendicular to the surface of the paper sheet). A \(3\times 3\) projective homography H between the frame of image I and the frame S is found from the coordinates of the paper sheet corners \(x = \{(x_1,y_1),\ldots ,(x_4,y_4)\}\) in frame I and the corresponding model corners \(x' = \{(0,0),(s_w,0),(0,s_h),(s_w,s_h)\}\) in frame S, according to (4), where \(s_w\) and \(s_h\) are the dimensions of the paper sheet in metres.
$$\begin{aligned} x' = Hx \end{aligned}$$
(4)
Then, each point of the list \(^Il_c\) is transformed with the projective homography H, according to (5).
$$\begin{aligned} \forall_{i,j}: {^{I}l}_{c}[i][j] \in {^{I}l}_{c} \;\; {^{S}l}_{c}[i][j] = H \, {^{I}l}_{c}[i][j] \end{aligned}$$
(5)
After that, the Z coordinate is added to each point to obtain the list \(^Sl_c\), where each point is \((x_{ij},y_{ij},0)\) (it is assumed that the paper sheet with the drawing is a planar surface and that each point of the drawing is located on the surface spanned by the X and Y axes).
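
A minimal sketch of the mapping defined by (4) and (5), assuming the corner coordinates in frame I and the sheet dimensions s_w, s_h are already known (Python/OpenCV for illustration only):

import cv2
import numpy as np

def contours_in_sheet_frame(contours_img, corners_img, s_w, s_h):
    corners_img = np.asarray(corners_img, dtype=np.float32)          # frame I
    corners_sheet = np.array([[0, 0], [s_w, 0], [0, s_h], [s_w, s_h]],
                             dtype=np.float32)                       # frame S
    H, _ = cv2.findHomography(corners_img, corners_sheet)            # Eq. (4)

    contours_sheet = []
    for c in contours_img:
        pts = np.asarray(c, dtype=np.float32).reshape(-1, 1, 2)
        mapped = cv2.perspectiveTransform(pts, H).reshape(-1, 2)     # Eq. (5)
        # The sheet is assumed planar, so Z = 0 is appended to every point.
        contours_sheet.append(np.hstack([mapped, np.zeros((len(mapped), 1))]))
    return contours_sheet, H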

The last step is the calculation of the homogeneous transformation matrix \(^{D}_{S}T\) relating frames D and S. A model of the paper sheet is created based on the previously provided \(s_w\) and \(s_h\) and the coordinates of the corners in the input image. This model consists of two vectors: \(\{(x_1,y_1),\ldots ,(x_4,y_4)\}\), the coordinates of the corners of the paper sheet in the image, and \(\{(0,0,0),(s_w,0,0),(0,s_h,0),(s_w,s_h,0)\}\), the corresponding coordinates of the model paper sheet in the coordinate system of the paper sheet. Based on that, using a Perspective-n-Point algorithm [21], the homogeneous transformation matrix \(^{D}_{S}T\) is computed. After image processing, the resultant data is sent to the control subsystem. The virtual receptor output buffer consists of the list of contours to be drawn \({^{S}l}_{c} = \{(p_{11},p_{12},\ldots ),(p_{21},p_{22},\ldots ),\ldots \}\), where \(p_{ij} = (x_{ij},y_{ij},0)\), related to the frame of the paper sheet S, and the transformation matrix \(^{D}_{S}T\).
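
The pose estimation step can be illustrated with the following sketch, which uses OpenCV's solvePnP as one possible Perspective-n-Point solver (the intrinsics K and dist_coeffs are assumed to come from the camera calibration):

import cv2
import numpy as np

def sheet_pose(corners_img, s_w, s_h, K, dist_coeffs):
    # Model corners in frame S and their projections in the input image.
    object_pts = np.array([[0, 0, 0], [s_w, 0, 0],
                           [0, s_h, 0], [s_w, s_h, 0]], dtype=np.float64)
    image_pts = np.asarray(corners_img, dtype=np.float64)

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)

    # Homogeneous transformation relating frames D and S.
    T_D_S = np.eye(4)
    T_D_S[:3, :3] = R
    T_D_S[:3, 3] = tvec.ravel()
    return T_D_S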

5 Control Subsystem and Virtual Effector

The control subsystem prepares commands to generate and execute the trajectory in the virtual effector. Based on the data received from the virtual receptor, the current robot position and the placement of the camera, the coordinates representing the contours to be drawn in the robot base coordinate system are calculated according to (6) and (7),
$$\begin{aligned} ^{0}_{S}T = \, ^{0}_{W}T \, ^{W}_{C}T \, ^{C}_{D}T \, ^{D}_{S}T \end{aligned}$$
(6)
$$\begin{aligned} \forall_{i,j}: {^Sl}_{c}[i][j] \in {^Sl}_{c} \;\; {^0l}_{c}[i][j] = \, ^{0}_{S}T \, {^Sl}_{c}[i][j] \end{aligned}$$
(7)
where \(^0l_{c}[i][j] = p(x_{ij},y_{ij},z_{ij})\) is a point in the base coordinate system and \(^Sl_{c}\) is the list of points provided by the virtual receptor (vision module). The resultant list of contours is drawn with Algorithm 2.
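
The frame composition of (6) and (7) amounts to a chain of homogeneous matrix products; the following sketch (Python/NumPy, with hypothetical argument names) shows how the contour points could be expressed in the base frame before trajectory generation:

import numpy as np

def contours_in_base_frame(T_0_W, T_W_C, T_C_D, T_D_S, contours_sheet):
    # Eq. (6): compose the sheet-to-base transformation from the known
    # kinematic and calibration transforms.
    T_0_S = T_0_W @ T_W_C @ T_C_D @ T_D_S
    contours_base = []
    for c in contours_sheet:
        # Eq. (7): transform every contour point (homogeneous coordinates).
        pts_h = np.hstack([c, np.ones((len(c), 1))])
        contours_base.append((T_0_S @ pts_h.T).T[:, :3])
    return contours_base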

6 Verification

The task of drawing requires a robotic arm with a pen attached to its end-effector (Sect. 3). The modified 6-DOF IRb-6 manipulator (Fig. 3a) was used to perform the tests. The robot was equipped with modern electronics [22] and software components replacing the industrial ones. The gripper had two coupled, parallel fingers (Fig. 3b). A GigE camera was attached to the manipulator end-effector. The system was calibrated according to the procedure described in [23]. For the purposes of this project, a compliant tool to hold a pen was designed (Fig. 3c).
Fig. 3

The real robot performing the task of drawing (http://youtu.be/BvF8Cou4Qpc). a IRp-6 robot. b End-effector and camera. c Compliant tool. d Drawing

The camera attached to the robot wrist (Fig. 3b) was a Point Grey Blackfly BFLY-PGE-14S2C-CS with 1.4 MPix resolution. The camera was equipped with an LG Security LC-M13VM2812IRD lens with F1.4 aperture and 2.8–12 mm focal length. The camera parameters (aperture, focal length and focus) were adjusted with movable rings. Additional acquisition parameters (e.g. brightness) were set with the camera driver.

In general, our robot controller is implemented using the Robot Operating System (ROS) [24] (a set of software libraries and tools that help build robot applications) and OROCOS [25] (a set of modular, configurable components for real-time robot applications). To control the robot, IRPOS (the IRb-6 robot virtual effector API) was used. Image processing was performed with DisCODe [26] (Distributed Component Oriented Data Processing), a framework for fast sensor data processing. A large number of drawings have been reproduced to test the designed system. All of them were correctly analysed and reproduced with only small imperfections (if the input drawing was properly prepared, and the requirements regarding lighting and paper position were met). Test drawings included random shapes (e.g. lines, rectangles, triangles, circles, dots) and drawings of contours of particular objects (e.g. a castle, text). Different colours, thicknesses and distances between shapes have been tested, as well as complex shapes consisting of many crossing lines. The experiments indicated that any contour drawing, even a very complex one, can be correctly reproduced. An example of a drawing prepared by a human, the contours found with the vision module and the final drawing are shown in Fig. 4.
Fig. 4

Example of an original drawing and its reproduction. a Original drawing (by a human). b Contours found with the vision application. c Reproduced drawing (by the robot)

7 Conclusions

A robotic arm with an appropriate control system can successfully reproduce contour drawings. The performed tests indicated that for our experimental station the optimal thickness of lines in the input drawing is 1–5 mm (if it is less, the line is ignored; if it is more, the reproduced line is deformed), and the minimal distance between the endings of two separate lines is 4 mm (if it is less, the neighbouring lines are connected in the resulting drawing). During the experiments some problems were found: dark dots in the places where the pen touches the paper for the first time (partially solved by the compliant pen holder), undesirable breaks in lines, and a small shift of the whole resulting drawing. None of these problems significantly affects the system; nevertheless, they can be fixed during further development.

There are many interesting modifications and improvements that can be applied to the system: recognition of filled areas besides contours, drawing recognition and reproduction with several different colours, and drawing on an inclined or non-planar surface. Thanks to the embodied agent based decomposition of the system into two main modules (vision and control), different robots (real and virtual effectors) can be used for the task of drawing with the same image processing subsystem (virtual receptor).


Acknowledgments

This project was funded by the National Science Centre according to the decision number DEC-2012/05/D/ST6/03097.

References

  1. Zieliński, C., Winiarski, T.: Motion generation in the MRROC++ robot programming framework. Int. J. Robot. Res. 29(4), 386–413 (2010)
  2. Winiarski, T., Banachowicz, K., Seredyński, D.: Multi-sensory feedback control in door approaching and opening. In: Filev, D., Jabłkowski, J., Kacprzyk, J., Krawczak, M., Popchev, I., Rutkowski, L., Sgurev, V., Sotirova, E., Szynkarczyk, P., Zadrozny, S. (eds.) Intelligent Systems'2014. Advances in Intelligent Systems and Computing, vol. 323, pp. 57–70. Springer International Publishing (2015)
  3. Jean-Pierre, G., Said, Z.: The artist robot: a robot drawing like a human artist. In: 2012 IEEE International Conference on Industrial Technology (ICIT), pp. 486–491 (March 2012)
  4. Zieliński, C., Winiarski, T.: General specification of multi-robot control system structures. Bull. Pol. Acad. Sci. Tech. Sci. 58(1), 15–28 (2010)
  5. Lin, C.Y., Chuang, L.W., Mac, T.T.: Human portrait generation system for robot arm drawing. In: IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM 2009, pp. 1757–1762 (July 2009)
  6. Tresset, P., Leymarie, F.F.: Portrait drawing by Paul the robot. Comput. Graph. 37(5), 348–363 (2013)
  7. Junyou, Y., Guilin, Q., Le, M., Dianchun, B., Xu, H.: Behavior-based control of brush drawing robot. In: 2011 International Conference on Transportation, Mechanical, and Electrical Engineering (TMEE), pp. 1148–1151 (Dec 2011)
  8. Jain, S., Gupta, P., Kumar, V., Sharma, K.: A force-controlled portrait drawing robot. In: 2015 IEEE International Conference on Industrial Technology (ICIT), pp. 3160–3165 (March 2015)
  9. Chenavier, F., Crowley, J.: Position estimation for a mobile robot using vision and odometry. In: Proceedings of the 1992 IEEE International Conference on Robotics and Automation, vol. 3, pp. 2588–2593 (May 1992)
  10. Yagi, Y., Kawato, S., Tsuji, S.: Real-time omnidirectional image sensor (COPIS) for vision-guided navigation. IEEE Trans. Robot. Autom. 10(1), 11–22 (1994)
  11. Brosnan, T., Sun, D.W.: Improving quality inspection of food products by computer vision: a review. J. Food Eng. 61(1), 3–16 (2004)
  12. Sharp, C., Shakernia, O., Sastry, S.: A vision system for landing an unmanned aerial vehicle. In: Proceedings of the 2001 IEEE International Conference on Robotics and Automation (ICRA), vol. 2, pp. 1720–1727 (2001)
  13. Zieliński, C., Kornuta, T., Winiarski, T.: A systematic method of designing control systems for service and field robots. In: 19th IEEE International Conference on Methods and Models in Automation and Robotics, MMAR'2014, pp. 1–14. IEEE (2014)
  14. Kasprzak, W., Kornuta, T., Zieliński, C.: A virtual receptor in a robot control framework. In: Recent Advances in Automation, Robotics and Measuring Techniques. Advances in Intelligent Systems and Computing (AISC). Springer (2014)
  15. Staniak, M., Winiarski, T., Zieliński, C.: Parallel visual-force control. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS '08 (2008)
  16. Canny, J.: A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986)
  17. Kiryati, N., Eldar, Y., Bruckstein, A.M.: A probabilistic Hough transform. Pattern Recognit. 24(4), 303–316 (1991)
  18. Mikolajczyk, K., Schmid, C.: An affine invariant interest point detector. In: Computer Vision, ECCV 2002, pp. 128–142. Springer (2002)
  19. Udrea, R.M., Vizireanu, N.: Iterative generalization of morphological skeleton. J. Electron. Imaging 16(1), 010501 (2007)
  20. Suzuki, S., et al.: Topological structural analysis of digitized binary images by border following. Comput. Vis. Graph. Image Process. 30(1), 32–46 (1985)
  21. Quan, L., Lan, Z.: Linear n-point camera pose determination. IEEE Trans. Pattern Anal. Mach. Intell. 21(8), 774–780 (1999)
  22. Walęcki, M., Banachowicz, K., Winiarski, T.: Research oriented motor controllers for robotic applications. In: Kozłowski, K. (ed.) Robot Motion and Control 2011. Lecture Notes in Control and Information Sciences, vol. 422, pp. 193–203. Springer Verlag London Limited (2012)
  23. Winiarski, T., Banachowicz, K.: Automated generation of component system for the calibration of the service robot kinematic parameters. In: 20th IEEE International Conference on Methods and Models in Automation and Robotics, MMAR'2015. IEEE (2015)
  24. Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., Wheeler, R., Ng, A.Y.: ROS: an open-source Robot Operating System. In: ICRA Workshop on Open Source Software, vol. 3 (2009)
  25. Bruyninckx, H., Soetens, P., Koninckx, B.: The real-time motion control core of the OROCOS project. In: Proceedings of the IEEE International Conference on Robotics and Automation, ICRA '03, vol. 2, pp. 2766–2771 (Sept 2003)
  26. Stefańczyk, M., Kornuta, T.: Handling of asynchronous data flow in robot perception subsystems. In: Simulation, Modeling, and Programming for Autonomous Robots. Lecture Notes in Computer Science, vol. 8810, pp. 509–520. Springer (2014)

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Warsaw University of Technology, Warsaw, Poland
