Instrument Failures for the da Vinci Surgical System: a Food and Drug Administration MAUDE Database Study
Our goal was to analyze reported instances of the da Vinci robotic surgical system instrument failures using the FDA's MAUDE (Manufacturer and User Facility Device Experience) database. From these data we identified some root causes of failures as well as trends that may assist surgeons and users of the robotic technology.
We conducted a survey of the MAUDE database and tallied robotic instrument failures that occurred between January 2009 and December 2010. We categorized failures into five main groups (cautery, shaft, wrist or tool tip, cable, and control housing) based on technical differences in instrument design and function.
A total of 565 instrument failures were documented through 528 reports. The majority of failures (285) were of the instrument’s wrist or tool tip. Cautery problems comprised 174 failures, 76 were shaft failures, 29 were cable failures, and 7 were control housing failures. Of the reports, 10 had no discernible failure mode and 49 exhibited multiple failures.
The data show that a substantial number of robotic instrument failures occurred over a short period of time. In reality, many instrument failures may go unreported, thus a true failure rate cannot be determined from these data. However, education of hospital administrators, operating room staff, surgeons, and patients should be incorporated into discussions regarding the introduction and utilization of robotic technology. We recommend institutions incorporate standard failure reporting policies so that the community of robotic surgery companies and surgeons can improve on existing technologies for optimal patient safety and outcomes.
Keywords: Robotics; Laparoscopy; Da Vinci surgical robot; Equipment failure; Surgery
Intuitive Surgical’s da Vinci Surgical System (Sunnyvale, CA) was granted FDA approval in July 2000. Since then, numerous papers have been published focusing on patient outcomes data, cost comparisons to open and laparoscopic techniques, and emerging robotic technologies [1–3], but little literature has focused on the mechanical failures of the da Vinci system. Several series have described the rates of unrecoverable faults of the robotic arms or the console requiring conversion to open or conventional laparoscopy or outright case cancellation [4–8]. We have instead focused on the portion of the hardware that experiences some of the most intense mechanical stress: the surgical instruments. A survey of the literature for instrument failure descriptions and rates yielded single-center analyses [9–11] and reports based on data reported voluntarily by surgeons or institutes [12, 13]. In addition, two papers [14, 15] reported on a single instance of instrument failure.
Rather than contacting surgeons and asking them to fill out a survey, Andonian et al. reported on data available through the MAUDE (Manufacturer and User Facility Device Experience) database, which is run by the United States Food and Drug Administration. Surgeons and institutions can voluntarily and anonymously report adverse events (defined as “potential and actual product use errors and product quality problems”), and the product manufacturer can provide a response. In many cases, the response includes specific descriptions of the damage caused to the device by the failure as well as a suspected cause. Andonian et al. summarized the reports posted as of August 27, 2007, and found a total of 168 reports for the da Vinci Surgical System, of which 43 (26 %) were reports of instrument failure. For each instrument failure, the type of instrument is listed but the failure mechanism is not.
Andonian et al. acknowledge the limitations of their study in the discussion section of their paper, including the fact that reporting to the MAUDE database is voluntary, so it is likely that not all failures are reported. Some reports may also have been missed because they were filed under an incorrect brand or company name. Even knowing that their survey most likely did not capture all da Vinci system failures, Andonian et al. attempted to estimate an overall failure rate for the system, based on an estimate of the number of robotic urologic procedures that had been performed at the time of their survey. Overall, they estimated a total system failure rate of 0.38 %. Using the same data, the instrument failure rate is 0.13 % (13 instrument failures out of every 10,000 procedures). This estimate is far lower than is reported elsewhere and almost certainly underreports the total number of failures.
Patients and clinicians deciding on the appropriate surgical approach, and medical institutions deciding on outcomes and cost, need to be informed of the realities of applying robotic technology to medicine. In an effort to provide this information, we have conducted a survey of the MAUDE database, and tallied up-to-date robotic instrument failures to derive an understanding of the baseline problem and suggest future improvements that could be made to robotic instrumentation.
Materials and methods
Using the MAUDE database, we performed a survey of failure events that occurred between January 2009 and December 2010, listed “Intuitive Surgical” as the manufacturer, and were reported as of January 25, 2011. We excluded non-instrument failures and instances in which the failure was caused by an avoidable user error. Reports that appeared to be duplicates were also excluded; when duplication was unclear, we erred on the side of overreporting.
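The selection criteria above amount to a simple filter over exported report records. The sketch below is illustrative only: the field names and the records are hypothetical and do not reflect the actual MAUDE schema.

```python
from datetime import date

# Hypothetical records; field names are illustrative, not the real MAUDE schema.
reports = [
    {"id": "MW100", "manufacturer": "Intuitive Surgical", "date": date(2009, 3, 14), "instrument_failure": True},
    {"id": "MW101", "manufacturer": "Intuitive Surgical", "date": date(2008, 11, 2), "instrument_failure": True},
    {"id": "MW102", "manufacturer": "Other Corp", "date": date(2010, 6, 1), "instrument_failure": True},
    {"id": "MW103", "manufacturer": "Intuitive Surgical", "date": date(2010, 6, 1), "instrument_failure": False},
    {"id": "MW100", "manufacturer": "Intuitive Surgical", "date": date(2009, 3, 14), "instrument_failure": True},  # duplicate
]

START, END = date(2009, 1, 1), date(2010, 12, 31)

def in_scope(r):
    """Keep instrument-failure reports from the target manufacturer and window."""
    return (r["manufacturer"] == "Intuitive Surgical"
            and START <= r["date"] <= END
            and r["instrument_failure"])

# Deduplicate by report id; a report is dropped only when clearly a repeat.
seen, survey = set(), []
for r in filter(in_scope, reports):
    if r["id"] not in seen:
        seen.add(r["id"])
        survey.append(r)

print(len(survey))  # 1 — only MW100 survives, counted once
```

In practice the ambiguity noted above (possible but unconfirmed duplicates) means the deduplication step errs toward keeping records, so the resulting tally may slightly overreport.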
For each report, we attempted to identify the root cause of the failure whenever possible. If the instrument was returned to Intuitive Surgical, we assumed the company's analysis of the failure was correct; if it was not returned, we assumed the reporting site correctly identified the mode of failure.
Institutions may choose from several reporting options when filing a failure report. The easiest option requires the institution to download a free application from the FDA's website. The application guides the user through filling out the standardized reporting form and then submits the report electronically.
Wrist or tool-tip failures
The most commonly reported failures involved the instrument’s wrist or tool tip. Of the 285 reported wrist or tool-tip failures, some involved failures of more than one wrist or tool-tip component. It is unknown whether wrist and tool-tip failures occur more frequently than other types of failures or if they are simply more frequently reported because they are most easily noticed. The latter may be the case, since the surgeon is looking at or is near the tool tips for most of a standard operation.
Wrist or tool-tip failure modes observed in da Vinci instruments:
- Jaw or tool tip broke or cracked
- Jaw or tool tip bent
- Cracked or broken pulley
- Distal clevis or proximal clevis broke or cracked
- Other wrist component (covers, etc.) broke
- Report says “instrument broke” or “piece broke off” but provides too little detail to attribute the failure to a specific component
- Scissor blades are dull
- Jaws won’t open or close, reason unknown
Cautery instrument failures
Cautery failure modes observed in da Vinci instruments:
- Instrument damage observed preoperatively that would likely result in an arcing event
- Arcing event observed intraoperatively
- Thermal damage to tissue noticed intraoperatively, without having observed arcing
- Instrument smoking or melting, or a burning smell noticed intraoperatively, without observing arcing
- Arcing evidence observed postoperatively by hospital staff
- Intuitive Surgical found evidence of arcing
- Instrument failed to cauterize
- Damaged conductor wires
Table 3 lists a total of 184 cautery failures, which occurred in 174 reports. In nine of these reports, damaged conductor wires were observed alongside evidence of arcing that was not attributable to the damaged wires. In a tenth report, the user noticed damage to an instrument preoperatively, used the instrument in a procedure anyway, and observed arcing intraoperatively.
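The distinction between reports and failures matters here: a single report may document several failure modes, so failure counts exceed report counts. A small tally over toy data (the report contents below are invented for illustration) makes the bookkeeping concrete:

```python
from collections import Counter

# Toy reports; each lists the cautery failure modes it documents (invented data).
reports = [
    ["arcing observed intraoperatively"],
    ["damaged conductor wires", "evidence of arcing"],  # one report, two failures
    ["instrument failed to cauterize"],
]

# Tally individual failures across all reports.
failure_counts = Counter(mode for modes in reports for mode in modes)
total_failures = sum(failure_counts.values())

print(len(reports), total_failures)  # 3 reports, 4 failures
```

The same logic explains why the survey records 184 cautery failures across only 174 reports.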
Instrument shaft failures
Shaft failure modes observed in da Vinci instruments:
- Shaft had scratches or material removal aligned with the tube axis (likely caused by rubbing in the port or cannula)
- Shaft had scratches or material removal not aligned with the tube axis (likely caused by instrument collision or mishandling)
- Material broke off shaft, or shaft splintered or cracked
- Shaft broke between the proximal clevis and main tube interface
- Shaft broke circumferentially
- Shaft had a crack or hole, in some cases leading to an arcing incident
Cable and control housing failures
The last two categories of failures were cable failures and control housing failures. A total of 29 cable failures were reported; 28 were reports of the cable breaking or fraying at the tool-tip end of the instrument, and one was a report of the cable slipping off a pulley at the tool-tip end of the instrument. Seven incidents of the control housing failing were also reported. Two reports said that the instrument could not be removed from the robot arm. Two reports said that the housing broke. One report said that the instrument would not register, even though it had uses remaining. One report said that the cleaning nozzle had been pushed into the housing. The last report said that the instrument jaws would not open and close while attached to the robot arm but that they would if the dials were spun manually.
Both categories of failure occurred less commonly than expected based on previous literature. Nayyar and Gupta  reported 11 cable failures out of 23 total failures, with the other 12 being control housing failures. Kim et al.  reported 6 control housing failures out of 19 total failures.
Monopolar curved scissors instrument failures
Of the 150 reports of an instrument jaw or tool tip breaking or cracking, 108 involved the monopolar curved scissors instrument. In most of those failures, one of the scissor blades broke off, fell into the patient, and was recovered. In addition, of the 174 cautery instruments that experienced an electrical failure, 125 were monopolar curved scissors; 116 of these reports were arcing incidents and another six involved smoking, melting, or a burning smell noted intraoperatively.
The MAUDE database represents an objective source of reported instrument failures, yet it inherently underrepresents the true denominator of instrument errors because proper tracking requires timely and accurate reporting from the surgeon, the OR staff, and the hospital, each of whom faces different incentives and obstacles to actually filing a report. If a hospital chooses to report, it may report all failures, no failures, or only a subset. In addition, because reporting is anonymous, there is no way to examine failures on an institution-by-institution basis, a point raised by Andonian et al. Even the instrument manufacturer may not know the true numerator if hospitals fail to report instrument failures to the company, and the true denominator (either the number of instruments on the market or the total number of instrument uses) is known only by the company itself. Our data do show, however, that many more failures exist in the database now than were cited by Andonian et al.: they observed only 43 instrument failures over more than seven years, whereas our survey found 528 reports of instrument failure in two years. The increased number of reports may be due to the increased adoption and utilization of robotics, but it may also reflect a higher rate of voluntary reporting.
An additional, less obvious, source of reporting bias may be that instruments returned to Intuitive Surgical had more detailed descriptions of the failure mechanism and were more likely to exhibit evidence of multiple types of failure. Some inconsistency is also added because different people at Intuitive Surgical wrote the damage reports and may have focused on different components of the instrument. For example, of the 28 total reports of scratching on the shaft, 13 occurred in January 2009 alone, and only 5 occurred in all of 2010. Although it is possible that this particular failure was truly less prevalent, another explanation could be that personnel changes or reporting practice changes occurred, yielding disparate data.
Failures must be observed by the user in order to be reported. In this analysis, we listed many types of failures, some of which are easier to miss than others. For instance, an instrument that fails to register when attached to the robot arm is more likely to be noticed than an instrument that has a cable starting to fray. Likewise, a surgeon is more likely to notice that one of the jaws on her scissors has fallen off than she is to notice that a small piece of the wrist covering or shaft covering has fallen off.
As noted above, Andonian et al. estimated a total system failure rate of 0.38 %, corresponding to an instrument failure rate of 0.13 % (13 instrument failures per 10,000 procedures), an estimate that almost certainly underreports the true number of failures. Kim et al. retrospectively identified 19 instrument failures over 1,797 robotic surgeries performed between July 2005 and December 2008 at their institution. Nayyar and Gupta prospectively identified 23 instrument failures out of 340 robotic procedures performed between July 2006 and March 2009 at their institution. These two reports yield a range of instrument failure rates from 1.1 to 6.8 % [10, 11]. The variation in failure rates between institutions may be due to several factors, including the length of the study and the number of surgeons and procedures included, all of which modify the portion of the learning curve over which the study was conducted. The types of failures reported (e.g., Mues et al. included only cautery failures, whereas Kim et al. and Nayyar and Gupta reported no cautery failures) also produce variation, as could whether data were captured prospectively or retrospectively.
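The per-institution rates quoted above follow directly from the reported counts; a quick sketch reproduces the 1.1 % and 6.8 % figures:

```python
# Instrument failure rates from the single-institution series cited above.
series = {
    "Kim et al.":       (19, 1797),  # (instrument failures, robotic procedures)
    "Nayyar and Gupta": (23, 340),
}

for name, (failures, procedures) in series.items():
    rate = 100 * failures / procedures
    print(f"{name}: {rate:.1f} %")
# Kim et al.: 1.1 %
# Nayyar and Gupta: 6.8 %
```

The sixfold spread between the two series underlines how strongly study length, case mix, and reporting method drive the published rates.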
Later studies conducted by Andonian et al. and by Kaushik et al. looked at failures that occurred at many different institutions. Kaushik et al. sent a web-based survey to a group of urologists and asked them to report on past failures they remembered experiencing with the da Vinci system. Of the 260 failure reports they received, only 21 (8 %) described instrument malfunctions. The limitation of that study, however, was its reliance on recall, which inherently biases the results. In contrast, Kim et al. found that instrument malfunctions made up 44 % of the robot failures experienced at their institution, and Nayyar and Gupta reported that 62 % of their institution’s failures were instrument malfunctions. Failure rates calculated from recall-based studies are likely biased toward significant failures that resulted in patient injury or required conversion, and they can be expected to underreport less serious failures, such as instrument failures that added only a short delay.
When looking specifically at individual classes of failures, it remains unclear, as noted above, whether wrist and tool-tip failures truly occur more frequently or are simply reported more frequently because they are the most easily noticed. Although we cannot conclude a failure rate for the monopolar curved scissors, the data suggest that this style of instrument usually fails in two specific ways: either a jaw breaks off or an arcing event occurs. Redesigning the instrument’s tool tip to strengthen the jaws around the cable crimp could help reduce the number of broken jaws. Modifying the design of the disposable covers would also improve the instrument’s reliability: the covers are difficult to install correctly and require considerable force, and the database descriptions for this instrument indicate that they are somewhat fragile and can be damaged or torn if installed incorrectly before use or if the instrument tips collide during a procedure.
The reported number of observed arcing incidents with the monopolar curved scissors may be slightly inflated, because several hospitals responded to observed arcing by replacing the disposable tips on the scissors and continuing to use them for the procedure. In some cases, this produced two or three reports of the same type of failure from a single procedure before the instrument itself was pulled or a new batch of tips was used. Even if half of the 78 reports of observed arcing were eliminated, however, the monopolar curved scissors would still account for 64 % of all cautery instrument failures. The large number of failures for the monopolar curved scissors is less surprising given that it is perhaps the most commonly used of all da Vinci instruments. Mues et al. reported on failures of the tip cover accessories for the monopolar curved scissors experienced at a single institution over a period of seven months; their prospective study identified 12 such failures out of 454 robotic procedures, for a failure rate of 2.6 %. The Mues et al. study identifies the tip cover accessory as a likely point of failure but unfortunately does not report any other instrument failures that occurred during the same period.
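One way to reproduce the 64 % figure above is to discard half of the 78 observed-arcing reports from both the monopolar curved scissors count and the overall cautery report count; this reading of the arithmetic is our assumption, not stated explicitly in the text.

```python
monopolar = 125   # monopolar curved scissors cautery failures
all_cautery = 174  # cautery failure reports overall
arcing = 78        # observed-arcing reports potentially double-counted

removed = arcing // 2  # drop half of the observed-arcing reports (39)
share = 100 * (monopolar - removed) / (all_cautery - removed)
print(f"{share:.0f} %")  # 64 %
```

Even under this deliberately conservative adjustment, the monopolar curved scissors remain the dominant source of cautery failures.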
Mues et al., Kim et al., and Nayyar and Gupta identified failures that fit into each of the five failure categories, but the distribution of failures over those categories differs markedly from the distribution in our survey. Together, the three groups reported more than twice as many control housing failures as we found, even though our survey included roughly ten times as many total failures, whereas we found 285 wrist or tool-tip failures compared with their two. Although all five failure categories are represented in the three papers [9–11], not all failure modes within each category are reported.
The variability in previous reports substantiates the need to create standard reporting practices from all institutions. Industry improvements need to be driven by objective metrics, and when describing surgical techniques to patients, accurate data are paramount to ensure that the patient is properly informed about his/her procedure’s risks and benefits. A review of methods to engineer improved instrumentation would be valuable but was outside the scope of this analysis.
The data show that the number of robotic instrument failures reported to MAUDE has increased dramatically in parallel with the rapid expansion of robotic surgery. In reality, many instrument failures may go unreported, thus a true failure rate cannot be determined from these data. However, education of hospital administrators, operating room staff, surgeons, and patients should be incorporated into discussions regarding the introduction and utilization of robotic technology. Because our data show such a significant increase in reported failures over previous reports, we believe it is imperative to know the true numerators and denominators, which can be achieved only by standardized reporting of instrument failure by surgeons and hospitals. Furthermore, our data show nonuniform reporting in the level of detail about instrument failures. This nonuniformity is concerning, and more concerning still is that underreporting certainly confounds the true failure numbers. We, as surgical robot users, will be unable to apply customer pressure and demand improved instrumentation without standard reporting processes in place to reveal the full scope of the problem. We recommend that institutions adopt standard failure reporting policies so that the community of surgical robot companies and surgeons can improve existing technologies for optimal patient safety and outcomes. In addition, as with training to manage other operating room equipment failures, training to familiarize OR staff with what to do in the event of a robotic instrument failure should be standardized.
The authors thank Seattle Children’s Hospital for providing access to expired da Vinci surgical instruments for study.
No competing financial interests existed at the time this research was completed and this paper written. Since that time, however, Diana Friedman has accepted a position at Intuitive Surgical in an unrelated part of the company.