From Teleoperation to Autonomous Robot-assisted Microsurgery: A Survey

Robot-assisted microsurgery (RAMS) has many benefits compared to traditional microsurgery. Microsurgical platforms with advanced control strategies, high-quality micro-imaging modalities and micro-sensing systems are worth developing to further enhance the clinical outcomes of RAMS. Within only a few decades, microsurgical robotics has evolved into a rapidly developing research field attracting increasing attention worldwide. Despite these appreciated benefits, significant challenges remain to be solved. In this review paper, the emerging concepts and achievements of RAMS are presented. We introduce the development tendency of RAMS from teleoperation to autonomous systems, and highlight upcoming research opportunities that require joint efforts from both clinicians and engineers to pursue further outcomes for RAMS in the years to come.


Introduction
Microsurgery requires manipulating delicate tissue or fragile structures such as small blood vessels, nerves, and tubes through a microscope [1] . The operation accuracy of the human hand is about 0.1 mm under optimal conditions [2] , which makes microsurgical operation challenging [3] . Physiological tremor, a high-frequency involuntary hand movement whose amplitude can exceed 100 μm, may compromise safety during microsurgical operations [4] . Moreover, the poor sensory feedback at the microscale poses extra challenges for manoeuvres in confined environments [5] . This indicates that the ability to measure the microscale interaction forces between the microsurgical tools and the objects is significantly important [6] . Other challenges come from the mapping strategies for teleoperation [7,8] , the limited region of interest provided by the microscope [9] , the obstructed view when operating in complex structures [10] , and potential collisions between instruments and delicate tissue regions.
To address the challenges mentioned above, the last decade has seen emerging technologies that assist robotic surgery in terms of imaging, sensing and robotics [11,12] .
Many great efforts have been made to develop robot-assisted microsurgical platforms [13] . To ensure that no extra force is imposed on the targeted operating area, embedded micro-sensing systems for microsurgical tools are needed [14] . To protect fragile micro components from uncontrolled exerted forces, virtual fixtures are employed to avoid damage when approaching forbidden regions. Tremor removal techniques are utilized on robotic platforms to ensure the reliability of the robotic system for microsurgery [15−17] . Micro-imaging techniques at the cellular level expand the capabilities of microsurgical operations by providing accurate and effective guidance. Fig. 1 shows several key milestones for robot-assisted microsurgery (RAMS). One of the first microsurgical manipulators for eye surgery was developed at Northwestern University [18] , following which the steady-hand concept was proposed by Taylor et al. [19] , targeting submillimeter manipulation. This concept was further developed into the cooperatively controlled mode, in which the robot can assist the operator in manipulating tissue within defined force limits [20,21] . Following this tendency, the cooperative control paradigm is widely used in robotic platforms for ocular surgery, where the surgical tool is held simultaneously by both a robotic arm and the surgeon's hand during the surgical operation. The robotic system "A 73" was developed for paranasal sinus surgery [22] and was tested on macerated cadaveric heads for validation.
In addition to the grounded robotic systems mentioned above, microsurgical handheld devices, also known as ungrounded robotic devices, have been developed for microsurgery. Handheld devices are better alternatives in some scenarios since they are compact, lightweight and easy to use compared to their grounded counterparts. For example, Micron, a well-known handheld robotic device for microsurgery, can sense dynamic movements using accelerometers and filter the tremor to provide stable operation. Its end-effector is driven by a parallel mechanism consisting of three piezoelectric stacks [23,28] .
As for clinical translation, the world's first robot-assisted eye surgery, a membrane peeling operation, was performed by surgeons at Oxford's John Radcliffe Hospital using the teleoperated Preceyes surgical system [29] , which featured motion-scaling and tremor-suppressing functions. The robotic ENT microsurgery system (REMS) is a commercial robotic platform built specifically for otolaryngology-head and neck surgery (OHNS) [26] . Following that, the Bern University robot has shown promising results for ENT surgery in clinical translation [27,30,31] .
At the microscale, the interaction between robots and humans becomes much more challenging. To reduce surgeon fatigue, provide greater dexterity during manipulation, avoid hand tremors and enable more precise surgical tasks, human-robot shared control and robot learning techniques are worth developing to support RAMS. With a higher level of autonomy, surgeons can focus on the crucial and complex parts of surgical procedures while the repetitive and tedious work can be done by robots. Moreover, a robot with higher intelligence may lead to better operation quality, since hand tremors can be removed while operation precision is enhanced.
Microsurgical robots and instruments are developing to become safer and smarter for wider clinical uptake [32]. A framework of six levels of autonomy for medical robotics has been proposed in [33]: no autonomy (Level 0), robot assistance (Level 1), task autonomy (Level 2), conditional autonomy (Level 3), high autonomy (Level 4), and full autonomy (Level 5). Since there are remaining issues and open challenges in safety, ethics, and regulation for medical robots with high autonomy, we mainly focus on microsurgical robots with autonomy levels 0 to 3. To this end, we will introduce the development of RAMS from teleoperation (no autonomy) to conditional autonomy with the support of advanced AI techniques including learning from demonstration (LfD) and reinforcement learning (RL).
Recently, various data-driven techniques such as imitation learning [34] , RL [35] , transfer learning [36] and deep learning [34] have been successfully employed for robotic skill learning and generalization. Different machine learning methods for robot manipulation skill learning are compared and discussed extensively in the review paper [37] . By incorporating robot learning techniques into RAMS, microsurgical systems can achieve higher levels of autonomy.

Fig. 1 Some of the key milestones in robot-assisted microsurgery: (a) Steady-hand robot [19] ; (b) Micron [23] ; (c) "A 73" [22] ; (d) New steady-hand robot [20] ; (e) Preceyes [24,25] ; (f) REMS [26] ; (g) Bern University robot [27] .

This review article is organized as follows. Section 1 introduces the motivation and the key milestones of the development of RAMS. The current robotic platforms for different types of microsurgery are briefly introduced in Section 2. Section 3 describes available imaging modalities for microsurgery and microscale sensors that are suitable for incorporation into microsurgical tools. Grounded robotic systems for microsurgery are described in Section 4, while ungrounded handheld smart surgical devices are reviewed in Section 5, including force control and haptic feedback devices, tremor suppression and image-guided devices. Section 6 summarizes the existing robot learning techniques that have a high potential for deployment in RAMS and introduces the future outlook for intelligent microsurgical robotics. Finally, conclusions are drawn and the evolution tendency is presented in Section 7.

Overview
Ophthalmology, otology, rhinology and laryngology are typical fields of microsurgery. In this section, we illustrate the characteristics and challenges of different types of microsurgery, which reveal the benefits provided by RAMS and the numerous opportunities in this research area. We also introduce microsurgical skill training using robotic platforms, which is indispensable for the development of RAMS.

Ophthalmology
Typical pathologies in ophthalmologic practice include retinal detachment, vitreous hemorrhage, macular pucker, macular hole and diabetic retinopathy [38] . Vitreoretinal procedures such as membrane lifting, retinal tear repair, and blood vessel cannulation involve inserting instruments into the eye through trocars on the sclera. Precise manipulations are needed for ophthalmologists to avoid collateral damage during the manipulation of vitreoretinal structures [39] . Therefore, micro-manipulation techniques are significant for eye surgery, which has high requirements for operation accuracy.
The original Johns Hopkins University (JHU) steady-hand robot (SHR) [19] was developed in 1999 for ocular surgery. However, it was a bulky system and was ergonomically inconvenient for the surgeon. Later, JHU Eye Robot 1 (ER1) [38] was built with a 3D translation stage, while JHU Eye Robot 2 (ER2) had a similar 3 degree-of-freedom (DoF) translation stage with a linkage in the wrist mechanism [21] . More detailed reviews of robot-assisted ophthalmology can be found in [40].

Otology
Otology is concerned with the study of the ear and its diseases, and with their diagnosis and treatment. The first robotic ENT surgery dates back to 1995, when a robot was developed for automatic micro-drilling in stapedotomy [41,42] . After that, emerging robotic platforms were developed to push forward the development of cochlear implant surgery, a kind of inner ear surgery.
Cochlear implantation is a promising means of providing auditory assistance for patients with severe hearing impairment. Cochleostomy is required before electrode insertion, and the whole cochlear implant operation includes mastoidectomy and the insertion of the electrode array into the spiral-shaped cochlea [43] . To reduce the insertion force during cochlear implant insertion, robot-assisted steerable electrode array insertions were developed [44] , while the Bern University robot (BUR) [27] is an advanced platform for cochlear implantation developed recently.
Middle ear surgery is another branch of otology that demands robotic assistance, including mastoidectomy, stapedectomy and cholesteatoma surgery. The small size of structures such as the malleus, incus and stapes, together with tool rigidity and a limited field of view (FOV), makes middle ear interventions challenging [45] . Surgery in the middle ear requires delicate movements to ensure safe operation in a confined environment with sensitive structures.

Rhinology
Nasal septum surgery and sinus surgery are typical in rhinology. They are challenging due to the difficult access for micro-telescopes and the complex nasal passages. Endonasal skull base surgery, developed in the 1970s, is routinely targeted at treating pituitary and other skull base tumours, endonasal tumours, and refractory chronic sinusitis.
Important structures for rhinology include the pituitary gland, cranial nerves, carotid arteries, and other nasal structures [46] , which indicates that safety for microsurgical operation is of significant importance. The RV-1a articulating arms robot has been designed for robotic paranasal sinus and skull base surgery. It uses redundant navigation and automated registration to improve intraoperative safety [47] .

Laryngology
Trans-oral laryngeal surgery is targeted at combating laryngeal cancer [48] , or dealing with benign nodules, polyps, cysts, and laryngeal papilloma within vocal cord pathologies. Robotic supraglottic laryngectomy was first reported in [49]. There are other potential applications such as tonsillectomy for tonsils and adenoids removal, oropharyngeal and supraglottic surgeries.
A teleoperated robot with enhanced distal dexterity and accuracy was developed for throat and upper airways surgery [50] . It can be used for suturing inside the larynx and can support the functional reconstruction of tissue [50] . The snake-like unit design incorporates several flexible tubular backbones.

Microsurgical skill training
A microsurgical robot research platform (MRRP) has been developed as a versatile microsurgical skill training and research platform to support RAMS. Similar to the da Vinci Research Kit, which aims to support research in the field of telerobotic surgery [51−53] , MRRP is developed based on the Robot Operating System and can be easily integrated with different high-level control algorithms [54] . In addition to research, MRRP can be used for microsurgical skill training. For example, a deep learning based method has been developed for automatic microsurgical skill training using kinematic and vision data obtained from MRRP during microsurgical operation [55] .

Overview
Imaging and sensing systems are indispensable for RAMS. Image quality is closely related to the construction of accurate 3D surface models of the patient's anatomy and the detection of the disease location [56] . The detection of microstructures throughout the intraoperative phase is important for further decision-making and operation [56] . Moreover, microsurgery normally targets delicate and precise tissue manipulation, so the measurement of the microscale interaction forces between the manipulator and the tissue is required to provide force feedback to the operators [6] . This has led to the development of microscale sensing techniques for RAMS [57,58] .
In this section, available medical imaging modalities and microscale sensors for integration into microsurgical tools are introduced.

Imaging techniques for microsurgery
High-resolution computerized tomography and magnetic resonance imaging are known as standard imaging techniques for preoperative and postoperative microsurgery. Ultrasound imaging, fluorescence imaging, optical coherence tomography, confocal laser endomicroscopy and other types of imaging systems can be used during the intervention in order to perform a real-time, reliable, in-vivo diagnosis.

Ultrasound biomicroscopy
Ultrasound biomicroscopy has been widely used in intravascular, abdominal, gynecologic, and many other medical applications [59] . A high-frequency ultrasound system for imaging the auditory system was proposed [60] . The generated images demonstrate that high-frequency ultrasound imaging can provide valuable diagnostic features. A force-assisted ultrasound imaging system has been developed through dual force sensing and admittance robot control [61] .

Fluorescence imaging
Fluorescence imaging [62,63] , based on distinguishing the wavelength of the light emitted by different components, could be applied during surgery to locate targeted cells.

Optical coherence tomography (OCT)
OCT has been developed as a dominant diagnostic imaging modality for ophthalmology, cardiology, and otology [64] . It has better resolution than ultrasound [65,66] and fluorescence techniques [67] for soft tissue imaging, and has been used for imaging various features in the middle ear [68] . Moreover, it serves as a potential sensor for intraoperative tool control [69−71] . A common-path swept-source OCT probe was used for robot-assisted 3D registration in cochlear implant surgery [72] .

Probe-based confocal laser endomicroscopy (pCLE)
Advances in pCLE offer real-time cellular-level information for in-vivo tissue characterization [73] . These imaging capabilities could help enhance the surgeon's ability to localize pathology both pre-operatively and intra-operatively [74] .

Sensing techniques for microsurgery
Sensing technologies are significant for microsurgery. In [75,76], tool-tissue forces were incorporated into a cooperative control loop. Sensing elements were built into the handle of the surgical instrument to measure tool-tissue interaction forces [77] .

Fiber Bragg grating strain sensors
Fiber Bragg grating (FBG) strain sensors, robust optical fibers that can detect fine changes in strain, have attracted rising interest in various medical applications and are good candidates for force sensing [78] . FBG sensors have been incorporated into a 1 DoF force-sensing tool [79] and then into a 2 DoFs pick-like instrument [80] . Other examples that utilized FBG based micro-forceps for force feedback in vitreoretinal surgery can be found in [81,82].
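The FBG sensing chain described above can be sketched numerically: an axial strain shifts the Bragg wavelength approximately as Δλ_B/λ_B ≈ (1 − p_e)·ε, and a bench calibration maps strain to tip force. All constants below (photo-elastic coefficient, nominal wavelength, calibration constant) are illustrative assumptions, not parameters of the cited instruments.

```python
# Sketch of FBG-based force sensing (illustrative constants only).
P_E = 0.22            # assumed effective photo-elastic coefficient of silica fiber
LAMBDA_B = 1550e-9    # assumed nominal Bragg wavelength (m)

def strain_from_shift(delta_lambda):
    """Axial strain recovered from a Bragg wavelength shift
    (temperature effects ignored for simplicity)."""
    return delta_lambda / (LAMBDA_B * (1.0 - P_E))

def force_from_strain(strain, k_cal=50.0):
    """Tip force (N) via an assumed linear calibration constant
    k_cal (N per unit strain) obtained from bench calibration."""
    return k_cal * strain

# A 12 pm wavelength shift maps to roughly 1e-5 strain and a sub-mN force,
# which is the order of magnitude relevant to vitreoretinal manipulation.
eps = strain_from_shift(12e-12)
force_mN = force_from_strain(eps) * 1e3
```

In practice a second, strain-isolated grating is usually interrogated alongside the sensing grating so that temperature-induced wavelength shifts can be subtracted out.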

Optical sensors
Optical sensors ensure measurement accuracy and provide extremely high resolution. They were used to estimate the force exerted by each steel mechanical micro-manipulator [83] , and optical displacement sensors were used in [84] to enable both position and force control.

Other types of sensors
There have been numerous studies carried out on measuring and quantifying tool-tissue interaction forces in microsurgery. For example, semiconductor strain gauges [85,86] , microelectromechanical systems (MEMS)-based diffractive optical encoders [87] , fiber optic sensing [88−90] , and polyvinylidene fluoride (PVDF) force sensing systems [91,92] have been used for RAMS. A force measurement system for blood vessel gripping by hydraulic-driven forceps was designed in [93]. Puncture force and cutting depth can be measured with microsensors integrated on the robotic end-effector for ocular surgery [94] .
Other sensing systems based on monolithic structure flexure and photo-sensors were introduced in [95]. Additionally, hybrid sensor systems have been utilized to incorporate feedback in the control loop [96] .

Overview
Grounded robotic systems for microsurgery can be divided into master-slave teleoperation systems and collaborative control systems. Master-slave control, also known as leader-follower control or teleoperation, is a common form of control for robotic surgery [97] . The motions of the operator are captured by the master device and replicated by the slave robot. In cooperative control, the surgeon and the robot hold the surgical tool together during the surgical operation. The force exerted by the surgeon on the tool is sensed through a force sensor, and the tool moves accordingly [98] : the motion of the robot is proportional to the force exerted by the user, which helps ensure safety.
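The force-proportional cooperative behaviour described above can be sketched as a simple admittance law. The gain, deadband, and velocity limit below are illustrative values, not parameters of any cited platform.

```python
def admittance_step(f_handle, gain=0.5e-3, deadband=0.05, v_max=2.0e-3):
    """One step of a cooperative (admittance) controller sketch.

    f_handle : force sensed at the tool handle (N)
    gain     : compliance gain (m/s per N) -- illustrative value

    Returns the commanded tool velocity (m/s): zero inside the noise
    deadband, proportional to the applied force otherwise, and clipped
    to v_max so the tool can never move dangerously fast.
    """
    if abs(f_handle) < deadband:   # ignore sensor noise and rest tremor
        return 0.0
    v = gain * f_handle
    return max(-v_max, min(v_max, v))
```

Tuning the gain modulates how sensitively the robot follows the surgeon's input, which is the parameter-tuning behaviour described for the steady-hand systems later in this section.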
In this section, grounded robotic systems for microsurgery are introduced. Some typical grounded microsurgical robots for RAMS and a microsurgical research platform are shown in Fig. 2. Table 1 summarizes the grounded microsurgical robotic platforms introduced in this review paper.

Teleoperation
The most common approach for operating microsurgical robots is master-slave control [53] , which allows the surgeon to stay outside the operating room.
A microsurgical teleoperated robot (MSR-1) has been built for ocular surgery and evaluated in a virtual environment [106] . Other examples include the NASA Jet Propulsion Laboratory (JPL) RAMS system [107] , the multi-arm stabilizing micromanipulator [108] , the Japanese ocular robot of Ueta et al. [102] , and the Preceyes [103] . For ENT surgery, the Bern University robot [27] has shown promising results. RobOtol is a teleoperated robotic system for stapedectomy surgery through the external ear canal [100] , composed of a slave robotic arm and a master joystick. Its design was presented in [109], and experiments were conducted on the external auditory meatus of human temporal bone specimens to demonstrate its potential applications in the surgery of otosclerosis [100] . Other teleoperated microsurgical robots for different types of surgery have been introduced in Section 2.
Teleoperated systems can filter out hand tremors, scale down hand motion, magnify tool forces as displayed to the surgeon, and provide a comfortable, ergonomically considered setup for surgeons. However, the teleoperation paradigm is criticized for its complexity and cost, since two robot systems are required [110] .
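The motion-scaling behaviour of a master-slave system can be sketched as an incremental mapping (shown here in one dimension for brevity). The scale factor and clutch logic are illustrative, not the implementation of any cited platform.

```python
class ScaledTeleop:
    """Sketch of incremental master-slave motion mapping with
    motion scaling, reduced to one axis for clarity."""

    def __init__(self, scale=0.1):
        self.scale = scale      # 1 mm of hand motion -> 0.1 mm of tool motion
        self.slave = 0.0        # slave tool position (m)
        self.prev_master = None # last observed master position (m)

    def update(self, master_pos, clutch_engaged=True):
        """Apply the scaled master increment to the slave. With the
        clutch released, the master can be repositioned in its
        workspace without moving the slave tool at all."""
        if self.prev_master is not None and clutch_engaged:
            self.slave += self.scale * (master_pos - self.prev_master)
        self.prev_master = master_pos
        return self.slave
```

Because only increments are mapped, the clutch lets the operator "ratchet" through a large slave workspace with a small, comfortable master workspace.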

Cooperative control
In a cooperative control system, the operator guides the end-effector of a robot arm using an attached grip that senses the force applied by the hand. The arm's passive stiffness prevents the tool position from being perturbed by tremors [28] .
The first surgical collaborative robot was reported in the early 1990s as the concept of the human extender, which aims to augment human capability and strength. It consisted of a hydraulic 6 DoF extender with force sensing capabilities to measure the forces imposed by the human operator, and included a force compensation system [111] . Inspired by this system, other groups started to use this concept in surgical robotic systems.

Fig. 2 Some typical grounded microsurgical robots: (a) [21] ; (b) A new cooperatively controlled ENT microsurgery robot performing transoral surgery [99] ; (c) Micromanipulator system (MMS-II), a lightweight and user-friendly system for middle ear surgery [45] ; (d) A microsurgical robot research platform [13] .
The steady-hand robotic systems have been developing for two decades since their first appearance in 1999. They utilize the inherent stiffness of the robot arm to damp high-frequency movement and cancel tremors. By tuning the gains in the cooperative controller, the sensitivity of the robot can be modulated [112] . Most of the current robotic platforms utilizing the cooperative control mode are for ocular surgery.
The latest steady-hand robot [20] is an assistive robot with a positioning resolution of 5-10 microns. Based on the cooperatively controlled concept [20] , it stabilizes and refines the motion by utilizing its stiff structure and non-back-drivable actuators with high-resolution encoders. Experiments involved vein cannulation on the chorioallantoic membrane of chicken embryos. However, the system failed to provide tool-tissue interaction force feedback to the surgeon.
A new cooperatively controlled bimanual robot was developed and readily adapted to other forms of head-and-neck surgery. Its linear stage was constructed using a 3 DoFs parallel design, which enhances the accuracy and stiffness of the whole system without significantly increasing the size of the device. However, this parallel mechanism had a limited linear range of motion [112] . The robotic vitreous retinal microsurgery system (RVRMS) has proved effective for vitreoretinal operations [106] and can be regarded as a cooperative robot assistant for vitreoretinal microsurgery.
In addition to tremor removal and precise operation, new mechanisms and structures are being explored to further improve RAMS. Snake-like robots have significant advantages for operating in confined environments [105] and have attracted increasing attention in recent years.
The advantage of cooperative control is that the surgeon can directly grasp the tool, preserving this familiar interface. However, due to the rigid connection between input and output, cooperative manipulation can hardly support motion scaling and has a limited workspace. If the motion artifact is beyond a certain level, the device cannot fully compensate for the undesired motions since the actuators may saturate.

Overview
Ungrounded handheld smart devices can be easily integrated into the surgical workflow without disrupting the traditional procedures of operation [113] . Handheld robots assist the operating surgeon in a more natural way without motion constraints since no mechanical linkages are involved. Force control and haptic feedback devices, tremor suppression and image-guided devices have been explored to demonstrate the emerging trends and opportunities for handheld robots for microsurgery.
In this section, we introduce ungrounded handheld devices for microsurgery. Some typical types of ungrounded handheld devices for microsurgery are shown in Fig. 3. Table 2 summarizes the key features of the ungrounded microsurgical robotic devices introduced in this review paper.

Devices with force feedback
In general, surgeons receive visual feedback through stereo-microscopes. Position control systems based solely on visual feedback were proposed in [122]. Visual cues related to tissue deformation are important feedback to operators, but they become inadequate for more complex and delicate surgical procedures. Haptic feedback in the control loop is important, especially for human-in-the-loop teleoperation systems [123] . To provide reliable force control and haptic feedback, the integration of force sensors into handheld devices has been explored to give the surgeon augmented feedback when the tools touch the tissue [124−126] .
A sub-millimetric fiber-optic force sensor with 0.25 mN resolution was integrated into a tool for retinal microsurgery [80] . Instead of mounting the sensing system on the handle, force sensing elements were built into the shaft of the instrument, inside the anatomical structure [80] . FBG sensors have been incorporated into a 1 DoF force-sensing tool [79] and a 2 DoFs pick-like instrument [80] ; other examples that utilized FBG based micro-forceps for vitreoretinal surgery force feedback can be found in [81,82]. A 3 DoFs force-sensing micro-forceps for robot-assisted membrane peeling was developed to sense the tissue pulling forces along the forceps axis, considering both the external tool-to-tissue interactions and the adverse effect of intrinsic actuation forces that arise from the elastic deformation of the jaws and friction [127] . Other devices provide the surgeon with force feedback using vision techniques instead of force sensors [128] . Augmented reality was employed to highlight the area where the force applied by a microsurgical handheld tool exceeds a limit [129] , which can be regarded as a visual force-feedback technique.

Devices with active constraints
Active constraints, also known as virtual fixtures, can help reduce surgeons' cognitive workload. With active constraints, the robot end-effectors can be constrained to pre-defined areas, orientations, or behaviours for increased safety [130] . They can be regarded as a basic form of haptic interaction between handheld robots and operators. Acrobot is known as the first robot that included the concept of virtual fixtures [131] .
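A forbidden-region active constraint of the kind described above can be sketched as a geometric projection of the commanded tool-tip position. The spherical region and the coordinates are illustrative choices of constraint geometry, not a description of any cited system.

```python
import math

def apply_forbidden_region(cmd, center, radius):
    """Forbidden-region virtual fixture sketch: if the commanded
    tool-tip position 'cmd' falls inside a sphere of 'radius' around a
    delicate structure at 'center', project it back onto the sphere's
    surface. All coordinates are (x, y, z) tuples in metres."""
    d = [c - o for c, o in zip(cmd, center)]
    dist = math.sqrt(sum(v * v for v in d))
    if dist >= radius or dist == 0.0:
        return tuple(cmd)  # already safe (degenerate centre case left to caller)
    s = radius / dist
    return tuple(o + v * s for o, v in zip(center, d))
```

Guidance fixtures work analogously but attract the tip toward a desired path instead of repelling it from a region; both reduce the operator's cognitive workload by enforcing the constraint in the control loop.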
Some handheld surgical devices incorporate force feedback based on active constraints [132] . Micron is a compact handheld tool that actuates its own tip to cancel tremor [28,133] , giving the operator a sense of natural operation. The virtual fixture concept has not only been utilized in steady-hand manipulation [134,135] , but also integrated into Micron in [136] to improve vision-based control accuracy with dense stereo vision.
Returning touch and force information is a current development trend for RAMS. Active constraints or virtual fixtures can help ensure safety and are worth developing. Moreover, direct force and tactile feedback are important for clinicians during surgery to avoid tissue injury or suture breakage, and should be incorporated into microsurgical tools or platforms.

Devices with tactile display
Capacitive force sensors have been attached to the tooltip of handheld microsurgical devices, while dielectric elastomer actuators have been mounted at the finger-hold position, serving as tactile displays for microsurgical devices [137,138] . Tactile feedback was also provided by a smart handheld surgical tool in [117]. A handheld surgical tool with 3 DoFs force sensors and three tactile displays, adopting the capacitive transduction principle, was developed in [119] ; its tactile display was actuated through an electroactive polymer actuator.
Although there are numerous studies related to force and tactile sensing, dimensional constraints as well as biocompatibility, sensitivity and safety considerations limit the translation of existing results to real surgical environments. Further efforts are required to create clinical translation opportunities.

Devices with tremor suppression
Most current handheld devices can be regarded as freehand active tremor suppression systems, which regulate the contact force against involuntary movements [115,139−144] .
Existing active handheld instruments for tremor compensation are mainly developed based on piezoelectric actuators [133] . For example, a handheld parallel micromanipulator utilizing six piezoelectric linear actuators was designed and fabricated in [145]. A linear delta manipulator for micromanipulation was developed [120] and incorporated tremor suppression functions. Other types of actuators are promising for tremor compensation, such as the ionic polymer metallic composite actuator [146] . Moreover, accelerometers and vision-based methods have been integrated into handheld devices to sense the motion of the surgeon [140,147] . The physiological tremors and unwanted motions are later filtered out from the sensed motion [23,139,148,149] .

Fig. 3 Some typical handheld microsurgical tools: (a) Illustration of the pCLE force control system [114] ; (b) Illustration of a force amplifying device [115] ; (c) The Craniostar [116] ; (d) A smart handheld tool with tactile display [117] .
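The filtering step described above can be sketched with a one-pole low-pass filter: voluntary hand motion lies below a few hertz while physiological tremor is roughly 8-12 Hz, so a low cutoff passes the intended motion and attenuates the tremor. The cutoff and sample rate are illustrative values, not parameters of any cited instrument.

```python
import math

class TremorFilter:
    """One-pole low-pass sketch for estimating the surgeon's intended
    (voluntary) motion from a noisy, tremor-contaminated signal."""

    def __init__(self, cutoff_hz=2.0, fs_hz=1000.0):
        # Discrete-time smoothing factor for the chosen cutoff frequency.
        self.alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / fs_hz)
        self.y = 0.0

    def step(self, x):
        """Filter one sensed hand-position sample."""
        self.y += self.alpha * (x - self.y)
        return self.y
```

In a handheld instrument, the filtered estimate of the intended motion would be subtracted from the raw sensed motion, and the residual tremor component cancelled by deflecting the tip actuators in opposition.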

Devices with image guidance
Poor depth perception through the microscope may reduce pointing accuracy during microsurgical operation [28] . It is therefore important to incorporate reliable image guidance into handheld devices and to enhance operation efficiency by raising the level of autonomy.
Visual servoing has also been deployed to support the autonomy of some surgical procedures. Automating biopsy and therapy delivery has been shown to improve surgical accuracy and the consistency of treatments [150] . This technique can help prevent the microsurgical tool from touching sensitive tissues and nerves during operation, ensuring safety. Recently, a monocular camera-guided and actively stabilized handheld robot was developed [151] , which demonstrated good performance for retinal vein cannulation.
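A minimal image-based visual servoing step of the kind used for such guidance can be sketched as a proportional law on the pixel error. A real system would map the image error to metric tool velocities through the camera's interaction matrix and calibration; the scalar gain below is an illustrative simplification.

```python
def ibvs_step(target_px, tool_px, gain=0.5):
    """Image-based visual servoing sketch: command a tool-tip motion
    proportional to the pixel error between the tracked tip and the
    target feature, both given as (u, v) image coordinates."""
    return (gain * (target_px[0] - tool_px[0]),
            gain * (target_px[1] - tool_px[1]))
```

Iterating this step shrinks the pixel error geometrically, so the tracked tool tip converges on the target feature in the image.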
An active micro-forceps with an optical fiberscope has been developed for intra-ocular microsurgery [152] , combining actively jointed forceps with fiber-optic illumination. In addition, a bendable endoscope has been developed to visualize cholesteatoma within the middle ear cavity [153] . Other microsurgical instruments with single-fiber OCT were developed to provide image guidance [154] .

Multi-functional handheld devices
The smart micromanipulation aided robotic-surgery tool (SMART), a novel handheld micro-forceps with tremor removal functions, has been developed for microsurgery [118] . A distance and motion sensor was integrated into the shaft of the micro-forceps. Common-path swept-source optical coherence tomography (CP SS-OCT) along with a piezoelectric motor was utilized for the SMART system [155] .
Based on CP SS-OCT, an active depth-guiding handheld micro-forceps for membranectomy was introduced in [156]. Motion compensation and tool-tip manipulation were separated by using two motors and a touch sensor. A smart motion monitoring and guiding algorithm was incorporated to facilitate intuitive freehand control with enhanced accuracy.

Overview
Among the learning methods, learning from demonstration (LfD), also named programming by demonstration (PbD) or imitation learning, is an effective way to transfer skills from humans to robots [157,158] . The effectiveness of LfD has been demonstrated in a number of fields, such as assembly [159] , medical scanning [160] and robot-assisted rehabilitation [161] . Reinforcement learning (RL) enables the robot to learn in an interactive environment by trial and error, and has been used for medical robotics in healthcare systems [162] . We believe that RL can be applied to intelligent microsurgical robots and contribute to microsurgery.
In this section, we introduce robot learning techniques that have a high potential to support RAMS by enhancing the autonomy level of microsurgical platforms or devices.

LfD for autonomous RAMS
According to the means of demonstration, LfD can be implemented by kinesthetic teaching [163] , teleoperation [164,165] , and passive observation [158] . The progress, advantages and applications of LfD have been reviewed in [166].
LfD has already been successfully used in robotic surgery [167]. For example, an LfD approach with primitive instantiation has been proposed to reduce operation time and provide force feedback in laparoscopic surgery [168]. Su et al. [169] proposed a framework to transfer motion skills from multiple human demonstrations in open surgery to robotic surgery by LfD. LfD has been used to generate trajectories for surgical tasks [170] with different initial conditions. In [171], a novel surgical human-robot collaborative system was developed based on LfD. By using a natural division of tasks into subtasks, this system could automatically identify the completion of a manual subtask, seamlessly execute the next automated subtask, and finally return control to the surgeon. In [172], LfD was used to generate haptic guidance for bimanual surgical tasks. Shin et al. [173] used LfD to bootstrap the learning policy by initializing the predictive dynamics with the given demonstrations. In [174], a vision-based LfD approach for multi-robot manipulation was developed to achieve human-robot skill transfer for surgical tasks, and LfD has also been used to transfer surgical suturing skills from humans to surgical robots [175]. As for applications in microsurgery, an automatic surgical tool navigation task has been implemented through LfD for retinal surgery [176]. The trained model provided tens-of-microns accuracy, aiming to streamline complex procedures and reduce the chance of tissue damage during surgical tool navigation.
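At its simplest, LfD distills several time-aligned demonstrations into a nominal trajectory plus a variability envelope that the robot can then track. The sketch below is a deliberately minimal baseline under that assumption; the cited works use richer models such as GMM/GMR or dynamic movement primitives, and the demonstrations here are synthetic.

```python
import numpy as np

def learn_trajectory(demonstrations):
    """Minimal LfD baseline: average several time-aligned demonstrations
    into one reference trajectory, with the per-step spread serving as a
    permissible corridor around it.

    demonstrations: array of shape (n_demos, n_steps, n_dims).
    """
    demos = np.asarray(demonstrations, dtype=float)
    mean = demos.mean(axis=0)           # nominal trajectory to reproduce
    std = demos.std(axis=0)             # demonstrated variability
    return mean, std

# Three noisy demonstrations of a 1-D insertion motion (synthetic data).
t = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(0)
demos = np.stack([t + 0.01 * rng.standard_normal(50) for _ in range(3)])[:, :, None]
mean, std = learn_trajectory(demos)
```

The variability envelope is useful in itself: regions where demonstrations agree closely can be tracked stiffly, while high-variance regions can tolerate compliant execution.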

RL for autonomous RAMS
A novel strategy based on a fuzzy reinforcement learning algorithm has been proposed to facilitate the preoperative configuration adjustment of the surgical robot [177]. Deep RL was employed to learn an optimal tensioning policy of a pinch point for minimally invasive robotic surgery [178]. More recently, a discrete reinforcement learning-based approach has been developed to automate the needle hand-off task and collaborative suturing [179]. Moreover, RL has been used for rapid trajectory generation in a bimanual needle regrasping task, one of the most challenging sub-tasks of suturing [180]. To avoid collisions between surgical tools and delicate tissue regions in the human body, a collision-avoidance path planning algorithm for a laparoscopic robot was designed by combining probabilistic roadmap and RL methods [181]. Dynamic movement primitives (DMP) have been integrated with RL for autonomous cholecystectomy [182], which can optimize the trajectory of the surgical robot's end-effector and avoid unwanted contact between the catheter tip and the vessel wall. Since training RL requires a huge amount of interaction data between the agent and the environment, a simulation environment is necessary for model training; for surgical robot tasks in particular, data collection is expensive and in many cases even impossible. Therefore, open-source RL environments for surgical robotics, such as dVRL [183] and SurRoL [184], have been developed to encourage sim-to-real transfer learning research for robotic surgery.
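The trial-and-error principle behind these systems can be illustrated with tabular Q-learning on a toy one-dimensional tool-depth task. The states, actions and rewards below are illustrative inventions, far simpler than the deep RL setups in the cited works, but the update rule is the standard one.

```python
import numpy as np

def q_learning(n_states=6, goal=4, episodes=300,
               alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy 'advance/retract tool depth' task.
    Actions: 0 = retract, 1 = advance. Reaching `goal` ends the episode;
    every other step incurs a small cost, so the learned policy advances."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy exploration
            a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
            s_next = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
            r = 1.0 if s_next == goal else -0.01
            # standard temporal-difference update
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q

Q = q_learning()
# Greedy policy: advance toward the goal depth from every shallower state.
```

The huge number of episodes real tasks require is exactly why the simulation environments mentioned above (dVRL, SurRoL) matter: the agent can fail cheaply in simulation before any sim-to-real transfer.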

Integration of RL and LfD for autonomous RAMS
LfD and RL algorithms have complementary advantages and disadvantages. LfD can acquire skill information directly from expert demonstrations. However, skillful handling of surgical instruments requires a long period of training and depends highly on the experience of surgeons. RL has an exploration mechanism that enables the robot to learn a desired policy without expert demonstrations. However, the training process is nontrivial and the reward function is difficult to specify.
Therefore, combining LfD with RL is an attractive and promising research direction. For example, a hybrid framework combining LfD and RL was proposed and integrated into a tendon-driven serpentine manipulator for robotic surgery [185] .
A new robot-assisted surgical training system was designed in [186] to improve the practical skills of surgeons, while a manipulation skill learning method for RAMIS was developed in [187]. The feasibility of the integration of LfD and RL for application in RAMS has been verified using an industrial robot to perform OCT-guided corneal needle insertions [35] .
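One common pattern for integrating the two is to warm-start the RL value estimates from demonstrations, so that exploration begins near expert behavior rather than from scratch. The sketch below shows this idea on a toy value table; it is a generic illustration of the pattern, not the scheme of any cited framework.

```python
import numpy as np

def pretrain_from_demo(Q, demo, bonus=1.0):
    """Warm-start an RL value table from a demonstrated state-action
    sequence: demonstrated actions receive an optimistic initial value,
    biasing the initial greedy policy toward the expert's choices before
    RL fine-tuning refines it."""
    for s, a in demo:
        Q[s, a] += bonus
    return Q

Q = np.zeros((5, 2))
demo = [(0, 1), (1, 1), (2, 1), (3, 1)]   # expert always advances the tool
Q = pretrain_from_demo(Q, demo)
# Greedy action now matches the demonstration in every demonstrated state.
```

Subsequent RL updates can then overwrite the demonstration bias wherever exploration finds a better action, which is precisely the complementarity discussed above.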

Safety consideration
Although RAMS has many advantages with the integration of various robot learning algorithms, we need to consider the safety of algorithm deployment for RAMS.
The increased complexity of algorithms can lead to an opaque decision-making process, which is not acceptable for surgical operations. A safety monitoring system [188] was introduced to apply context-specific safety constraints to the motions of the robot. Another advanced safety monitoring system [189] was developed recently to realise real-time, context-aware identification of erroneous gestures in robotic surgery.
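Conceptually, such a monitor sits between the learned policy and the actuators and vetoes any command that violates context-specific constraints. The sketch below checks only workspace and velocity limits with illustrative placeholder bounds; the cited systems enforce far richer, surgery-specific constraint sets.

```python
def check_motion(position, velocity,
                 workspace=((-5.0, 5.0),) * 3, v_max=2.0):
    """Runtime safety check in the spirit of a safety monitor: reject
    commands that leave the allowed workspace or exceed a velocity
    limit. The bounds here are illustrative placeholders, not clinical
    values."""
    in_workspace = all(lo <= p <= hi for p, (lo, hi) in zip(position, workspace))
    speed_ok = sum(v * v for v in velocity) ** 0.5 <= v_max
    return in_workspace and speed_ok

assert check_motion((0.0, 1.0, -2.0), (0.5, 0.0, 0.0))        # safe command
assert not check_motion((0.0, 6.0, 0.0), (0.0, 0.0, 0.0))     # outside workspace
```

Because the check is independent of the learned policy, it remains interpretable even when the policy itself is opaque, which is the point of layering such monitors over learning-based RAMS.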
In addition to safety monitoring mechanisms, explainable artificial intelligence (XAI) methods can be combined with RAMS to enhance model explainability. An effective surgical gesture recognition approach with an explainable feature extraction process was presented in [190]. This approach could provide RAMS with explainable information by showing the regions of the surgical images that had a strong relationship with the surgical gesture classification results.

Future outlook
Medical robotics research is evolving from the macroscale towards the microscale, and robot learning algorithms are worth developing for intelligent microsurgical robots. Most existing robot learning algorithms have been applied to robot-assisted laparoscopic surgery. We envision that, in the future, robot learning algorithms can be widely applied to RAMS and enable autonomous RAMS to improve clinical outcomes by enhancing the precision and efficiency of microsurgical operations. Ongoing research will result in versatile robotic micro-surgeons for precise and autonomous surgical intervention.

Conclusions
Recent technological advances in RAMS demonstrate immense potential for wide microsurgery applications, such as ocular and ENT surgery. Unintentional movements, the inability to visualize small structures, and insufficient dexterity can render some delicate maneuvers physically impossible during microsurgery. The challenges of RAMS include the design and fabrication of versatile microsurgical tools with integrated imaging, sensing and autonomous control techniques.
Robot-assisted techniques are being used to address the aforementioned challenges and accelerate the development of automation for RAMS. Robots can help realize the full potential of microsurgery with improved safety, accuracy and consistency, along with state-of-the-art sensing and imaging technologies. Designing and manufacturing microscale robotic systems with a higher level of autonomy requires interdisciplinary research, particularly the joint involvement of robotics and AI researchers, materials scientists, and clinicians.
The future of microsurgery lies in early and precise intervention, which requires the development of micro-instruments that can facilitate in vivo microscale surgical operations. We envision that the inherent challenges of RAMS will be addressed, and that robot-assisted techniques will enhance the efficiency of microsurgery and bring significant clinical benefits.
