A coarse-to-fine framework for accurate positioning under uncertainties—from autonomous robot to human–robot system

Abstract

Recently, with the growing trend of high-mix, low-volume manufacturing, demand has increased for greater flexibility and autonomy in industrial robots and human–robot systems without sacrificing accuracy. In this paper, a framework based on a coarse-to-fine strategy is proposed for industrial robots and human–robot systems that pushes the bounds of machine autonomy and flexibility while maintaining good accuracy and efficiency. Under the proposed framework, industrial robots and human operators conduct the coarse global motion, implementing low-bandwidth planning-level intelligence. Meanwhile, fine local motion for compensating accumulated on-line uncertainties is realized by an add-on robotic module, implementing high-bandwidth action-level intelligence. Consequently, the overall system in both applications adapts well to uncertain work conditions while achieving fast and accurate positioning. A two-dimensional contour following task, simulating simplified tasks in industrial applications (e.g., sealant application, inspection, welding), was implemented and evaluated under both autonomous robot control and human–robot collaboration.



Abbreviations

2D: Two dimensions

3D: Three dimensions

CAD: Computer-aided design

CAM: Computer-aided manufacturing

DoF: Degrees of freedom

PD: Proportional-derivative

ROI: Region of interest

TCP: Tool-center-point

VGA: Video graphics array

α: Parameter deciding key point extraction

\({\Delta} J_{r}^{+}\): Uncertain part of \(J_{r}^{+}\)

ϵ: User-defined lower bound for \(q_{x}\), \(q_{y}\)

γ: Conversion vector

\(\hat{\sigma}\): Estimate of σ by the high-speed sensor

ω: Positive-definite coefficient matrix

δ: Projected uncertainty in the image space of the global visual feedback

ϕ: Image feature of the locally configured high-speed camera

Σ: Bound of the accumulated positioning error

σ: Accumulated positioning error

τ: Control reference under PD control

\(\tau^{\prime}\): Control reference under PD and coordination control

\(\theta_{m}\): Joint vector of the main subsystem

ξ: Image feature of the global visual feedback

e: Image error in ξ

\(J_{c}\): Jacobian of the add-on module (from joint space to the image space of the global vision)

\(J_{c}^{\prime}\): Jacobian of the add-on module (from joint space to the image space of the locally configured high-speed vision)

\(J_{r}\): Jacobian of the main subsystem (from joint space to the image space of the global vision)

\(J_{r}^{+}\): Pseudo-inverse of \(J_{r}\)

\(K_{p}\), \(K_{v}\): Coefficients for the proportional and derivative terms

P, Γ: Two symmetric positive-definite matrices

q: Joint vector of the add-on module

B: User-defined bound for \(C_{f}\)

\(C_{f}\): Balancing coefficient

\(q_{x}\), \(q_{y}\): Joint angles of the add-on module's joint-x and joint-y, respectively

sd: Standard deviation

References

1. Jain A, Chan F, Singh S (2013) A review on manufacturing flexibility. Int J Prod Res 51:5946–5970

2. Maeda Y, Nakamura T (2015) View-based teaching/playback for robotic manipulation. ROBOMECH J 2:1–12

3. Gu J, de Silva C (2004) Development and implementation of a real-time open-architecture control system for industrial robot systems. Eng Appl Artif Intell 17:469–483

4. Asakawa N, Takeuchi Y (1997) Teachingless spray-painting of sculptured surface by an industrial robot. In: Proc. Int. Conf. on Robotics and Automation, pp 1875–1879

5. Chen H, Xi N, Kamran S, Chen Y, Dahl J (2004) Development of automated chopper gun trajectory planning. Ind Robot 31(3):297–307

6. Hayati S (1985) Improving the absolute positioning accuracy of robot manipulators. J Robot Syst 2(4):397–413

7. Wang J, Zhang H, Fuhlbrigge T (2009) Improving machining accuracy with robot deformation compensation. In: Proc. Int. Conf. on Intelligent Robots and Systems, pp 3826–3831

8. Brink K, Olsson M, Bolmsjö G (1997) Increased autonomy in industrial robotic systems: a framework. J Intell Robot Syst 19:357–373

9. Johansson R, Robertsson A, Nilsson K, Cederberg P, Olsson M, Olsson T, Bolmsjö G (2004) Sensor integration in task-level programming and industrial robotic task execution control. Ind Robot 31(3):284–296

10. Mendes N, Neto P, Pires JN, Loureiro A (2013) An optimal fuzzy-PI force/motion controller to increase industrial robot autonomy. Int J Adv Manuf Technol 68:435–441

11. Mei B, Zhu W, Dong H, Ke Y (2015) Coordination error control for accurate positioning in movable robotic drilling. Assem Autom 35(4):329–340

12. Shirinzadeh B, Teoh PL, Tian Y, Dalvand MM, Zhong Y, Liaw HC (2010) Laser interferometry-based guidance methodology for high precision positioning of mechanisms and robots. Robot Comput-Integr Manuf 26:74–82

13. Sharon A, Hogan N, Hardt E (1993) The macro/micro manipulator: an improved architecture for robot control. Robot Comput-Integr Manuf 10:209–222

14. Lew J, Trudnowski D (1996) Vibration control of a micro/macro-manipulator system. IEEE Comput Sci Eng 16:26–31

15. Omari A, Ming A, Nakamura S, Kanamori C, Kajitani M (2001) Development of a high-precision mounting robot with fine motion mechanism (3rd report): positioning experiment of SCARA robot with fine mechanism. J Jpn Soc Precis Eng 67:1101–1107

16. Freundt M, Brecher C, Wenzel C (2008) Hybrid universal handling systems for micro component assembly. Microsyst Technol 14:1855–1860

17. Sulzer J, Kovac I (2010) Enhancement of positioning accuracy of industrial robots with a reconfigurable fine-positioning module. Precis Eng 34:201–207

18. Schneider U, Olofsson B, Sörnmo O, Drust M, Robertsson A, Hägele M, Johansson R (2014) Integrated approach to robotic machining with macro/micro-actuation. Robot Comput-Integr Manuf 30:636–647

19. Cheah C, Hirano M, Kawamura S, Arimoto S (2003) Approximate Jacobian control for robots with uncertain kinematics and dynamics. IEEE Trans Robot Autom 19:692–702

20. Wang H, Liu Y, Zhou D (2008) Adaptive visual servoing using point and line features with an uncalibrated eye-in-hand camera. IEEE Trans Robot 24:843–856

21. Bauchspiess A, Alfaro S, Dobrzanski L (2001) Predictive sensor guided robotic manipulators in automated welding cells. J Mater Process Technol 109:13–19

22. Lange F, Hirzinger G (2003) Predictive visual tracking of lines by industrial robots. Int J Rob Res 22:889–903

23. Levine S, Pastor P, Krizhevsky A, Ibarz J, Quillen D (2018) Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. Int J Rob Res 37:421–436

24. Tanaka Y, Kinugawa J, Sugahara Y, Kosuge K (2012) Motion planning with worker’s trajectory prediction for assembly task partner robot. In: IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pp 1525–1532

25. Tan J, Duan F, Kato R, Arai T (2010) Safety strategy for human-robot collaboration: design and development in cellular manufacturing. Adv Robot 24:839–860

26. Cherubini A, Passama R, Crosnier A, Lasnier A, Fraisse P (2016) Collaborative manufacturing with physical human-robot interaction. Robot Comput-Integr Manuf 40:1–13

27. Kazerooni H (1990) Human-robot interaction via the transfer of power and information signals. IEEE Trans Syst Man Cybern 20(2):450–463

28. Huang S, Bergström N, Yamakawa Y, Senoo T, Ishikawa M (2017) Robotic contour tracing with high-speed vision and force-torque sensing based on dynamic compensation scheme. IFAC-PapersOnLine 50(1):4616–4622

29. Huang S, Shinya K, Bergström N, Yamakawa Y, Yamazaki T, Ishikawa M (2018) Dynamic compensation robot with a new high-speed vision system for flexible manufacturing. Int J Adv Manuf Technol 95:4523–4533

30. Yamazaki T, Katayama H, Uehara S, Nose A, Kobayashi M, Shida S, Odahara M, Takamiya K, Hisamatsu Y, Matsumoto S, Miyashita L, Watanabe Y, Izawa T, Muramatsu Y, Ishikawa M (2017) A 1 ms high-speed vision chip with 3D-stacked 140 GOPS column-parallel PEs for spatiotemporal image processing. In: Proc. Int. Solid-State Circuits Conf., pp 82–83

31. Baeten J, De Schutter J (2002) Hybrid vision/force control at corners in planar robotic-contour following. IEEE/ASME Trans Mechatron 7:143–151

32. http://ishikawa-vision.org/fusion/tracing/contourtracing.mp4. Accessed 10 Dec 2019


Author information


Corresponding author

Correspondence to Shouren Huang.


Appendix: Key point extraction algorithm

The key point extraction algorithm for coarse motion planning of the main robot, as used in Section 5.2, was proposed in our previous work [29]. For ease of reading, we recall the method for extracting the key points of an arbitrary contour as follows (a code sketch is given at the end of this appendix):

1. As shown in Fig. 14a, the image is binarized with a proper threshold, and the start point p0 is chosen as the point on the target contour nearest to a user-predefined point. The current point pc is initialized to p0.

2. A probing circle centered at pc is used to detect its intersection pd with the target contour along a predefined extraction direction.

3. Point pi, reached from pc along the extraction direction with a small step size s, is examined. If its distance to chord \(\overrightarrow{p_{c} p_{d}}\) is smaller than a parameter α (where α < Lmax, with Lmax the maximum work range of the compensation module in the image space of the VGA camera), the next point, one step size s further on, is examined. Otherwise, pi is elected as the new extraction point pn. With the insertion of pn, the points between pc and pn are re-examined to check whether the distance from any point pj to chord \(\overrightarrow{p_{c} p_{n}}\) exceeds α. If so, another extraction point is inserted at pj, and a recursive check of the points between pc and pj is conducted. Once all points between pc and pn are secured (the distance from each point to the corresponding chord is smaller than α), the probing circle is moved by updating pc to pn. The algorithm then returns to the previous step until all discretization points of the target contour have been visited. The distance D from pi to chord \(\overrightarrow{p_{c} p_{d}}\) is calculated by

    $$ D = \frac{\left| \overrightarrow{p_{c} p_{i}} \times \overrightarrow{p_{c} p_{d}} \right|}{\left| \overrightarrow{p_{c} p_{d}} \right|}. \tag{17} $$
4. Points [p0, p1, ..., pn] are the key points extracted from the target contour.

Fig. 14

Coarse motion planning of the main robot. a Method for extraction of key points. b Case with α = 2.52. c Case with α = 1.56

It should be noted that the density of the key points can be adjusted through the parameter α. As shown in Fig. 14b and c, a relatively large α extracts fewer key points, whereas a smaller α generates more. A commercial robot controller usually offers several methods of on-line path generation through the selected key points. The point-to-point (P2P) method, illustrated in Fig. 14a, generates a path that passes strictly through all key points with non-constant velocity. In contrast, the smooth path method (100% smoothing factor) achieves a constant velocity profile, although the resulting motion trajectory is not known to the user in advance. Contour following at constant speed achieves good energy efficiency by reducing unnecessary acceleration and deceleration. In this study, the smooth path (100%) method was therefore adopted to control the main robot; however, it introduces an additional source of uncertainty in the main robot's trajectory.
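To make the procedure concrete, the following Python sketch illustrates the chord-deviation test of Eq. (17) and the recursive insertion of key points on a discretized contour. It is a minimal illustration under simplifying assumptions, not the implementation from [29]: the probing-circle intersection is approximated by a fixed look-ahead of `probe` contour samples, the step size s is taken to be the contour sampling interval, and names such as `extract_key_points` and `probe` are our own.

```python
import numpy as np

def chord_distance(p, a, b):
    """Distance D from point p to chord a->b, as in Eq. (17),
    using the magnitude of the 2D cross product."""
    ab, ap = b - a, p - a
    return abs(ap[0] * ab[1] - ap[1] * ab[0]) / np.linalg.norm(ab)

def _secure(contour, c, n, alpha, keys):
    """Recursively insert extra key points so that every point between
    p_c and p_n deviates less than alpha from the corresponding chord."""
    worst, worst_dev = None, alpha
    for j in range(c + 1, n):
        dev = chord_distance(contour[j], contour[c], contour[n])
        if dev > worst_dev:
            worst, worst_dev = j, dev
    if worst is not None:
        _secure(contour, c, worst, alpha, keys)   # check points before p_j
        keys.append(worst)                        # insert new key point at p_j
        _secure(contour, worst, n, alpha, keys)   # check points after p_j

def extract_key_points(contour, alpha, probe=40):
    """contour: (N, 2) array of ordered contour points, contour[0] = p0.
    alpha:   deviation threshold (alpha < L_max, the work range of the
             compensation module in the VGA image space).
    probe:   look-ahead in samples, standing in for the probing circle."""
    contour = np.asarray(contour, dtype=float)
    keys, c, n_pts = [0], 0, len(contour)
    while c < n_pts - 1:
        d = min(c + probe, n_pts - 1)          # p_d: probing-circle intersection
        nxt = d                                # default: no point deviates
        for i in range(c + 1, d):              # walk along the chord in steps s
            if chord_distance(contour[i], contour[c], contour[d]) > alpha:
                nxt = i                        # elect p_i as extraction point p_n
                break
        _secure(contour, c, nxt, alpha, keys)  # re-check points between p_c, p_n
        keys.append(nxt)
        c = nxt                                # move the probing circle: p_c <- p_n
    return contour[keys]
```

Consistent with Fig. 14b and c, calling `extract_key_points` with a larger α returns fewer key points, while a smaller α returns more.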


Cite this article

Huang, S., Ishikawa, M. & Yamakawa, Y. A coarse-to-fine framework for accurate positioning under uncertainties—from autonomous robot to human–robot system. Int J Adv Manuf Technol 108, 2929–2944 (2020). https://doi.org/10.1007/s00170-020-05376-w


Keywords

  • Intelligence architecture
  • Industrial robot
  • Autonomous robot control
  • Human–robot collaboration
  • High-speed vision