Abstract
Computer-assisted surgical navigation systems have gained popularity in surgical procedures that demand high amounts of precision. These systems aim to track the real-time positioning of surgical instruments in relation to anatomical structures. Typically, state-of-the-art methods involve tracking reflective 3D marker spheres affixed to both surgical instruments and patient anatomies with infrared cameras. However, these setups are expensive and financially impractical for small healthcare facilities. This study suggests that a fully optical navigation approach utilizing low-cost, off-the-shelf parts may become a viable alternative. We develop a stereoscopic camera setup, costing around $120, to track the translational movement of open-source fiducial markers on a positioning platform. We evaluate the camera setup based on its reliability and accuracy. Using the optimal set of parameters, we achieved a root mean square error of roughly 2 mm. These results demonstrate the feasibility of real-time, cost-effective surgical navigation using off-the-shelf optical cameras.
1 Introduction
Computer-assisted surgical navigation systems have revolutionized how patients are treated in challenging medical procedures (Nijmeh et al. 2005). These systems monitor the location of surgical instruments in relation to specific areas of interest on the patient. From here, guiding systems help surgeons plan trajectories that minimize risk of unintended anatomical damage (Hassfeld and Mühling 2001). In recent years, dedicated navigation platforms have been developed for use in fields such as neurosurgery, orthopedic surgery, and maxillofacial surgery (Zhang et al. 2019).
The effectiveness of surgical navigation systems hinges on their ability to continuously track anatomical structures and instruments. Conventionally, surgeons employing free-hand techniques must divert their attention from the patient toward external screens to confirm anatomical positions through medical scans (Burström et al. 2021; Mezger et al. 2013). The integration of instrument and anatomical tracking into the surgeon’s workflow eliminates the need for such redirection.
While there are many applications where surgical navigation can be implemented, these cases typically fall into two categories: monitoring actively moving instruments over time and guiding instruments to fixed locations. The former, common in neurosurgery, demands systems that track surgeons' instrument movements (Pivazyan et al. 2023). The latter, common in spinal surgery, necessitates systems that guide instruments precisely to specific points on the patient's body (Wallace et al. 2020). In this context, instrument mobility is restricted, with an emphasis on achieving greater precision. How instruments are tracked in operating room environments varies from procedure to procedure, but systems generally utilize either fixed or moving markers. In both cases, markers are usually attached to instrument heads, and the range of movement depends on the procedure. In spinal surgery and pain management procedures, markers remain effectively fixed as surgeons make incremental needle insertions toward target anatomies; this contrasts with operations that require tracking markers as surgeons actively move their instruments. Although the maximum error tolerated in navigation systems depends on the application, a Euclidean error under 2 mm is typically accepted (Morley et al. 2023).
Current state-of-the-art systems provide surgeons with high positional precision and improve the success of surgical operations (Mezger et al. 2013). These systems revolve around sensing the depth of objects using infrared (IR) cameras. Specifically, retro-reflective markers that can be detected by near-infrared stereoscopic cameras are attached to medical instruments (Mezger et al. 2013; Wu et al. 2019). To achieve depth perception, stereoscopic cameras utilize a two-camera system, which infers positional information from the difference between the captured images (Smith et al. 2012). Yet, despite advances in medical technology, the cost of implementing such camera systems can be a barrier for small healthcare centers, independent practices, and training purposes (Asselin et al. 2018). For instance, novel 3D navigation systems have been developed by several commercial companies. While innovative, these systems can cost anywhere from $250,000 to $600,000 (Malham and Wells-Quinn 2019).
Recent improvements in consumer-grade cameras have opened the door for fully optical, or videometric, tracking implementations. For instance, researchers have successfully implemented an iPhone-based augmented reality navigation setup for brain lesion localization (Hou et al. 2016). This technology is considerably more accessible to healthcare institutions compared to more premium alternatives due to its low cost (Sorriento et al. 2020). However, optical tracking systems are unable to utilize the reflective properties of traditional IR markers.
Here, we developed a low-cost, fully optical tracking system using off-the-shelf, readily available, inexpensive web cameras (Fig. 1). To facilitate position tracking, we utilize open-source fiducial markers as an additional cost-effective measure. This study aims to validate the potential of using low-cost alternatives in surgical navigation. To do this, we design a series of experiments that enable the real-time capture of a marker's movement. By analyzing the error associated with inexpensive stereo tracking, we aim to gain insights into the feasibility of similar systems in surgical applications.
2 Related work
Surgical navigation systems serve diverse functions and offer various advantages for medical practitioners (Kraus et al. 2010). Broadly speaking, these systems can be classified either as those that provide additional guidance during surgical procedures, or those that perform certain steps of the procedure autonomously, contingent upon the user’s discretion (Musahl et al. 2002). These systems employ a diverse set of tracking techniques, including infrared, electromagnetic, and optical approaches, to facilitate their operation.
Commercial implementations of surgical navigation systems have been offered by medical technology companies for decades. Notably, there are systems available in the market designed to enhance the precision of spine surgeries through intraoperative image-based guidance. By attaching infrared (IR) reflective markers to surgical instruments and fiducial infrared markers to bony locations via a back incision, IR cameras can accurately track instrument movements. These systems can cost anywhere between $365,000 and $505,000 (Rossi et al. 2021).
To eliminate the necessity for extra skin incisions during marker insertion, there have also been non-invasive commercial implementations for pedicle screw placement surgeries. These systems utilize markers equipped with IR LEDs to serve as a positional reference. Smaller markers, with IR LEDs attached, can be affixed to instruments to track them. By detecting the infrared LEDs, the movement of the instruments with respect to the patient can be monitored. Such systems start at $215,000 and can cost as much as $350,000 (Rossi et al. 2021).
The utilization of infrared markers and cameras can significantly increase the cost of surgical navigation systems. To address this, researchers have explored various techniques for surgical navigation, including mechanical tracking, electromagnetic tracking, ultrasound tracking, and more. In particular, Sharma et al. proposed adopting electromagnetic tracking for performing implant surgery, utilizing monotonic variations in magnetic fields along the X, Y, and Z axes (Sharma et al. 2021). By attaching one microchip to an implant within the body and another to a surgical tool, the devices concurrently measured and relayed magnetic field information from their respective locations to an external receiver. Stenmark et al. discuss the possibility of using videometric tracking of 3D dice in navigation systems (Stenmark et al. 2022). The known geometry of the die and its position in each frame, coupled with fiducial markers, were used to estimate the subsequent frame's position using the solvePnP (Perspective-n-Point) algorithm.
Regardless of the type of navigation system used, it is vital that accurate tracking be performed in real time. In this paper, we contribute to optical tracking by introducing a low-cost stereoscopic camera to enable depth perception and investigate the concept of videometric tracking using disparity maps generated from stereoscopic camera images combined with fiducial markers.
3 Stereoscopic camera implementation
Here, we develop an inexpensive tracking system, as seen in Fig. 1a, using off-the-shelf parts. We calibrated a stereoscopic camera setup using two Logitech C920x web cameras. As of 2023, these cameras individually retail for $60, making the total cost of the tracking system significantly cheaper than state-of-the-art systems. Figure 2 displays our stereo camera setup. The baseline distance, defined as the center-to-center distance between the two cameras, is 12.5 cm.
We calibrated our stereoscopic camera system using a calibration checkerboard with 8 by 11 vertices, following established checkerboard calibration procedures (Zhang 2004).
4 Open-source fiducial marker tracking
Our marker tracking approach utilizes the ArUco marker package from OpenCV to facilitate positional tracking. ArUco markers are a square-based fiducial marker system specifically designed for camera pose estimation (Garrido-Jurado et al. 2014). Leveraging fiducial marker tracking allows for the versatile tracking of both instruments and anatomies, depending on the marker’s placement. For instance, positioning a marker on a surgical instrument enables spatial tracking of the instrument, while positioning a marker on a patient serves as a reference point for relevant anatomical features. In this sense, the versatility of marker tracking has applications in designing low-cost systems for general navigated surgery, as opposed to just being an implementation for a specific domain. As the marker moves, the stereoscopic camera tracks the marker’s center position using OpenCV-based functions.
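As a concrete illustration, the following minimal sketch detects a marker in a frame and returns its pixel center. It assumes the OpenCV ≥ 4.7 aruco API (opencv-contrib-python); the dictionary choice is an assumption, as the study does not specify one.

```python
import cv2

# Sketch of per-frame marker-center detection (OpenCV >= 4.7 aruco API).
# DICT_4X4_50 is an assumed dictionary; the study does not state which was used.
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())

def marker_center(frame):
    """Return the (x, y) pixel center of the first detected marker, or None."""
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is None:
        return None
    # corners[0] has shape (1, 4, 2): the marker's four corner points.
    return corners[0][0].mean(axis=0)  # average of the four corners
```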
Given the marker centers from the left \((x_l, y_l)\) and right \((x_r, y_r)\) cameras, we can compute the disparity between the markers. Disparity, d, is formally defined as the horizontal difference between the two marker centers:
\[ d = x_l - x_r \quad (1) \]
Using the focal length f and baseline distance b of the cameras, we can then compute the 3D position of the marker with respect to the camera, with depth \(z = fb/d\) and lateral coordinates \(x = x_l z/f\) and \(y = y_l z/f\).
In practice, calculating disparity involves more sophisticated algorithms than directly applying Eq. (1). The most common algorithms include Stereo Block Matching (StereoBM) and Stereo Semi-Global Block Matching (StereoSGBM) (Hirschmuller 2008). StereoBM, implemented in OpenCV, employs a small sum of absolute differences (SAD) window to identify matching points between the left and right images. Disparity is then calculated as the horizontal pixel difference between matching points (Kim et al. 2020). StereoSGBM, alternatively, implements pixelwise matching using Mutual Information, which approximates a global 2D smoothness constraint (Hirschmuller 2008). In practice, StereoSGBM has been shown to produce more accurate disparity maps than StereoBM and is less susceptible to outliers (Stenmark et al. 2022). Therefore, StereoSGBM was used in this study to compute disparity values.
Using StereoSGBM, we can rewrite Eq. (1) as a function of its OpenCV implementation given the left \(L_v\) and right \(R_v\) video frames:
\[ d = \mathrm{StereoSGBM}(L_v, R_v)[x_l, y_l] \quad (2) \]
That is, the disparity value used is sampled from the computed disparity map at the marker center.
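A minimal OpenCV sketch of Eq. (2), assuming rectified grayscale frames; the matcher parameters below are illustrative defaults from common OpenCV examples, not the settings tuned for this study.

```python
import cv2

# StereoSGBM disparity; parameter values are illustrative, not the study's.
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,  # search range; must be divisible by 16
    blockSize=5,
    P1=8 * 5 ** 2,      # penalty for small disparity changes
    P2=32 * 5 ** 2,     # penalty for large disparity changes
)

def disparity_at(left_gray, right_gray, x, y):
    """Sample the disparity map at the marker center (x, y)."""
    # compute() returns fixed-point disparities scaled by 16.
    disp = stereo.compute(left_gray, right_gray).astype(float) / 16.0
    return disp[int(y), int(x)]
```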
5 Experimental setup
To test the accuracy of our stereoscopic camera system, we developed a positioning platform to move the marker in the X and Y directions (Tsui et al. 2023a). In 3D space, the marker has coordinates (x, y, z). The platform spans 500 mm by 500 mm and considers motion in 2D. As shown in Fig. 3a, we use two motors to move the marker in a square pattern. The use of a positioning platform enables us to perform consistent, repeatable tracking trials focused solely on evaluating the system's accuracy. Information regarding the accuracy of the positioning platform is provided in Appendix 3.
During our experiments, we placed the stereoscopic camera system directly above the positioning platform at a height of 500 mm. As the marker was shuttled around according to Fig. 3b, the stereoscopic camera captured and recorded the position of the marker. Table 1 displays the parameters used during testing. The ArUco markers utilized are 40 mm by 40 mm.
Upon capturing a live video feed of the marker moving on the positioning platform, we experimented with converting the video frames to different color spaces. Each color space employs a distinct set of parameters to represent the video's color characteristics. By applying different color spaces to the video frames, we aim to assess which one yields the smallest error. Notably, the measurement speed of the system is determined by the frames per second (fps) of the camera. In this implementation, we capture a live video feed at 30 fps.
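For reference, the capture setup amounts to a few OpenCV calls; the device indices below are assumptions that depend on how the operating system enumerates the webcams.

```python
import cv2

# Open both webcams and request 30 fps; indices 0 and 1 are assumptions.
left_cap, right_cap = cv2.VideoCapture(0), cv2.VideoCapture(1)
for cap in (left_cap, right_cap):
    cap.set(cv2.CAP_PROP_FPS, 30)

ok_left, left_frame = left_cap.read()    # grab one frame pair
ok_right, right_frame = right_cap.read()
```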
Marker tracking tests were performed in Red, Green, and Blue (RGB), Hue, Saturation, and Lightness (HSL), and Hue, Saturation, and Value (HSV) color spaces. For HSL and HSV spaces, we manually thresholded values of Lightness and Value to separate the marker from the background.
In a previous study, we performed a similar experiment using the same three color spaces. However, rather than computing disparity using StereoSGBM, we calculated disparity directly using Eq. (1). We observed minimal differences in error among color spaces: across all of them, the average error in marker tracking remained consistently around 5.5 mm (Tsui et al. 2023b). Additionally, the detection percentage of the marker, defined as the number of correct detections relative to the total frames in which it was presented, was equivalent across color spaces. In this set of experiments, we look to answer the following question: does a stricter lower bound on Lightness (L) and Value (V), which runs the risk of lowering the detection percentage, result in lower error? When thresholding Lightness and Value, the lower threshold for L and V was empirically chosen to be 55%. A total of five experiments were performed for each color space.
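As a sketch of the HSV case, the 55% lower bound translates to a threshold on the Value channel; the HSL case is analogous using OpenCV's HLS ordering, where Lightness is channel 1. The exact masking logic is an assumption, as the text only states the threshold.

```python
import cv2
import numpy as np

def mask_low_value(frame_bgr, lower_frac=0.55):
    """Suppress pixels whose Value falls below the 55% lower bound (HSV case)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 0, int(255 * lower_frac)], dtype=np.uint8)  # V in [0, 255]
    upper = np.array([179, 255, 255], dtype=np.uint8)                # H in [0, 179]
    mask = cv2.inRange(hsv, lower, upper)
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)

# For HSL, convert with cv2.COLOR_BGR2HLS and threshold channel 1 (Lightness).
```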
6 Proposed tracking algorithm
The proposed tracking algorithm given below aims to capture the 3D position (x, y, z) of the ArUco marker A. Given that A is detected in the left \(L_v\) and right \(R_v\) video frames, the function find_pixel_location() is called with the help of the ArUco package to return the pixel locations of the marker in the left \((x_l, y_l)\) and right \((x_r, y_r)\) video frames. get_disparity() is also called to return the disparity value at the center of the ArUco marker. Using the focal length f and baseline b values, one can then compute the 3D position.
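A Python rendering of this procedure might look as follows; it assumes rectified frames with the principal point at the image origin, and treats find_pixel_location() and get_disparity() as the helper routines named above (sketched in Sects. 3 and 4).

```python
def track_marker_3d(L_v, R_v, f, b):
    """Estimate the 3D position (x, y, z) of ArUco marker A from a frame pair."""
    located = find_pixel_location(L_v, R_v)   # ArUco detection in both frames
    if located is None:
        return None                           # marker not visible in both views
    (x_l, y_l), (x_r, y_r) = located
    d = get_disparity(L_v, R_v, x_l, y_l)     # StereoSGBM map at marker center
    if d <= 0:
        return None                           # invalid disparity; skip frame
    z = f * b / d                             # depth via stereo triangulation
    return (x_l * z / f, y_l * z / f, z)
```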
7 Results
We divide our results into two sections to examine both the system's reliability and accuracy. In Fig. 4a, we report the detection percentage of the markers in the different color spaces, as well as the percentage of outliers removed. Quantitatively, we eliminated outliers using the Random Sample Consensus (RANSAC) algorithm. In our implementation, RANSAC fits a linear model to the data such that detected outliers do not influence the estimates, allowing us to robustly filter them out. We report in Fig. 4b the average percentage of outliers removed. Even though the detection percentage of the HSL and HSV color spaces decreased significantly compared to RGB due to the Lightness and Value thresholding, this does not appear to reduce the number of outliers filtered out. We also note that in the RGB case, where no thresholding is applied, the marker is detected upwards of 99% of the time.
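A minimal sketch of the filtering step, assuming scikit-learn's RANSACRegressor; the text does not name a specific RANSAC implementation, and the default residual threshold shown is an assumption.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

def filter_segment(xy):
    """Fit a line to one movement segment with RANSAC and keep the inliers.

    xy: (N, 2) array of tracked positions from a single linear movement.
    """
    ransac = RANSACRegressor()        # default residual threshold; an assumption
    ransac.fit(xy[:, [0]], xy[:, 1])  # model y as a linear function of x
    return xy[ransac.inlier_mask_]    # drop points flagged as outliers
```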
Using the filtered data, we compute the root mean square error (RMSE) of the experimental positions with respect to their theoretical values, as given in Table 2. Here again, a reduction in detection percentage does not appear to lower the error. Furthermore, while the RMSE of the positional readings is relatively low across all spaces, achieving this comes at the expense of filtering out roughly 25–30% of the data. Keeping the video frames in the RGB color space produces the lowest error estimate of roughly 2 mm while also filtering out the least data.
8 Discussion
Detailed experimentation with fiducial markers interfaced with a basic stereoscopic camera system demonstrates the potential for a fully optical, low-cost surgical navigation system. Given a $120 budget and an open-source marker implementation, this study reveals that an error hovering around 2 mm is achievable using inexpensive off-the-shelf components. However, it is important to note that the 2 mm Euclidean error tolerance includes both translation and rotation, whereas our study was conducted in only two degrees of freedom. Future consideration should be given to accurately tracking marker rotations in pitch, yaw, and roll. For example, one possible extension of this system would be to use a five-degree-of-freedom robotic arm to mimic surgeon movements.
One additional caveat to these results is the number of outliers that have to be removed in post-processing. In our tests, we noticed that the majority of outliers originate from noisy disparity maps. Addressing these outliers through filtering real-time disparity maps would enhance the system’s robustness. Additionally, our study primarily evaluates the accuracy of a stereoscopic system in translational movements. We broadly classify surgical procedures into two categories: procedures that utilize fixed markers on instruments to get incrementally close to patient anatomies, such as in pain management procedures, and procedures that actively track surgeon-instrument movements, such as in applications of neurosurgery. Our system is intended to handle instances of fixed markers with minimal rotation, rather than surgeon-instrument tracking.
Importantly, using a stereoscopic camera with an ArUco marker is not the only way to capture its 3D position. In fact, ArUco supports the solvePnP algorithm to approximate both translation and rotation. In our empirical tests, we found that the translation and rotation matrices generated by ArUco vary substantially, so we opted to utilize stereoscopic vision to capture position. One future consideration for low-cost surgical navigation is how to compute the 3D position of markers using non-ArUco-based methods. Scaling up this system as-is for use in operating rooms may prove challenging due to variables impacting detection. While ArUco markers are effective for fast prototyping, their detection depends strongly on camera distance and marker size. While recent literature suggests the feasibility of using 3D ArUco markers, consideration needs to be given to more effective detection methods that do not sacrifice the low-cost nature of ArUco (Stenmark et al. 2022).
While our implementation achieves a 2 mm error, we aim to further reduce the error in our system. For instance, upgrading the stereoscopic system with higher-quality cameras should enhance accuracy and diminish the number of outliers. Another avenue is experimenting with positional data fusion from a multi-stereoscopic camera system. Lastly, filtering the disparity maps produced by block-matching algorithms would address tracking inconsistencies. For example, the Weighted Least Squares (WLS) filter smooths the edges of the disparity map by imposing weighted least squares regularization on the image. Such filters can smooth the output, potentially reducing the presence of outliers (Farbman et al. 2008).
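As one possible realization, OpenCV's ximgproc module (opencv-contrib-python) provides a disparity WLS filter; the lambda and sigma settings below are illustrative, not tuned values.

```python
import cv2

# WLS post-filtering of SGBM disparity maps (requires opencv-contrib-python).
left_matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)
wls = cv2.ximgproc.createDisparityWLSFilter(matcher_left=left_matcher)
wls.setLambda(8000.0)   # regularization strength (illustrative value)
wls.setSigmaColor(1.5)  # sensitivity to image edges (illustrative value)

def filtered_disparity(left_gray, right_gray):
    """Return a WLS-smoothed disparity map for one rectified frame pair."""
    disp_left = left_matcher.compute(left_gray, right_gray)
    disp_right = right_matcher.compute(right_gray, left_gray)
    return wls.filter(disp_left, left_gray, None, disp_right)
```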
9 Conclusion
In this study, we have designed a fully optical tracking system composed of off-the-shelf, low-cost parts and open-source fiducial markers. We designed and calibrated a stereoscopic camera to record the 3D position of a moving ArUco marker. Average error, detection percentage, and proneness to outliers were used to evaluate the positioning accuracy of our system in various color spaces. Using optimal experimental settings, we obtained a root mean square error of 1.84 mm. The results suggest the possibility of developing a real-time, cost-effective surgical navigation system.
Availability of data and material
All calibration and tracking codes developed, as well as the videos used for these studies, are available on our GitHub repository (https://github.com/darintsui/StereoNavigation).
References
Asselin M, Lasso A, Ungi T et al (2018) Towards webcam-based tracking for interventional navigation. In: Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling, vol 10576, pp 534–543. https://doi.org/10.1117/12.2293904
Burström G, Persson O, Edström E et al (2021) Augmented reality navigation in spine surgery: a systematic review. Acta Neurochir 163(7):843–852. https://doi.org/10.1007/s00701-021-04708-3
Farbman Z, Fattal R, Lischinski D et al (2008) Edge-preserving decompositions for multi-scale tone and detail manipulation. ACM Trans Gr 27(3):1–10. https://doi.org/10.1145/1360612.1360666
Garrido-Jurado S, Muñoz-Salinas R, Madrid-Cuevas FJ et al (2014) Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recogn 47(6):2280–2292. https://doi.org/10.1016/j.patcog.2014.01.005
Hassfeld S, Mühling J (2001) Computer assisted oral and maxillofacial surgery - a review and an assessment of technology. Int J Oral Maxillofac Surg 30(1):2–13. https://doi.org/10.1054/ijom.2000.0024
Hirschmuller H (2008) Stereo processing by semiglobal matching and mutual information. IEEE Trans Pattern Anal Mach Intell 30(2):328–341. https://doi.org/10.1109/TPAMI.2007.1166
Hou Y, Ma L, Zhu R et al (2016) A low-cost iphone-assisted augmented reality solution for the localization of intracranial lesions. PLoS ONE 11(7):e0159185. https://doi.org/10.1371/journal.pone.0159185
Kim DT, Cheng CH, Liu DG et al (2020) Designing a new endoscope for panoramic-view with focus-area 3d-vision in minimally invasive surgery. J Med Biol Eng 40:204–219. https://doi.org/10.1007/s40846-019-00503-9
Kraus MD, Krischak G, Keppler P et al (2010) Can computer-assisted surgery reduce the effective dose for spinal fusion and sacroiliac screw insertion? Clin Orthoped Relat Res 468(9):2419–2429. https://doi.org/10.1007/s11999-010-1393-6
Malham GM, Wells-Quinn T (2019) What should my hospital buy next?—guidelines for the acquisition and application of imaging, navigation, and robotics for spine surgery. J Spine Surg 5(1):155–165. https://doi.org/10.21037/jss.2019.02.04
Mezger U, Jendrewski C, Bartels M (2013) Navigation in surgery. Langenbecks Arch Surg 398(4):501–514. https://doi.org/10.1007/s00423-013-1059-4
Morley C, Arreola D, Qian L et al (2023) Mixed reality surgical navigation system; positional accuracy based on food and drug administration standard. Surg Innov. https://doi.org/10.1177/15533506231217620
Musahl V, Plakseychuk A, Fu FH (2002) Current opinion on computer-aided surgical navigation and robotics: role in the treatment of sports-related injuries. Sports Med 32(13):809–818. https://doi.org/10.2165/00007256-200232130-00001
Nijmeh AD, Goodger NM, Hawkes D et al (2005) Image-guided navigation in oral and maxillofacial surgery. Br J Oral Maxillofac Surg 43(4):294–302. https://doi.org/10.1016/j.bjoms.2004.11.018
Pivazyan G, Sandhu F, Beaufort A et al (2023) Basis for error in stereotactic and computer-assisted surgery in neurosurgical applications: literature review. Neurosurg Rev 46(1):20. https://doi.org/10.1007/s10143-022-01928-8
Rossi VJ, Wells-Quinn TA, Malham GM (2021) Negotiating for new technologies: guidelines for the procurement of assistive technologies in spinal surgery: a narrative review. J Spine Surg. https://doi.org/10.21037/jss-21-107
Sharma S, Telikicherla A, Ding G et al (2021) Wireless 3d surgical navigation and tracking system with 100 μm accuracy using magnetic-field gradient-based localization. IEEE Trans Med Imaging 40(8):2066–2079. https://doi.org/10.1109/TMI.2021.3071120
Smith R, Day A, Rockall T et al (2012) Advanced stereoscopic projection technology significantly improves novice performance of minimally invasive surgical skills. Surg Endosc 26(6):1522–1527. https://doi.org/10.1007/s00464-011-2080-8
Sorriento A, Porfido MB, Mazzoleni S et al (2020) Optical and electromagnetic tracking systems for biomedical applications: a critical review on potentialities and limitations. IEEE Rev Biomed Eng 13:212–232. https://doi.org/10.1109/RBME.2019.2939091
Stenmark M, Omerbašić E, Magnusson M et al (2022) Vision-based tracking of surgical motion during live open-heart surgery. J Surg Res 271:106–116. https://doi.org/10.1016/j.jss.2021.10.025
Tsui D, Jo M, Nguyen B et al (2023a) Optical surgical navigation: A promising low-cost alternative. Paper presented at the 45th annual international conference of the IEEE engineering in medicine & biology society (EMBC), Sydney, Australia. https://doi.org/10.1109/EMBC40787.2023.10340384
Tsui D, Melentyev C, Rajan A et al (2023b) An optical tracking approach to computer-assisted surgical navigation via stereoscopic vision. Paper presented at the ASME 2023 32nd conference on information storage and processing systems, Milpitas, California, USA. https://doi.org/10.1115/ISPS2023-111020
Wallace N, Schaffer N, Freedman B et al (2020) Computer-assisted navigation in complex cervical spine surgery: tips and tricks. Spine Surg 6(1):136–144. https://doi.org/10.21037/jss.2019.11.13
Wu H, Lin Q, Yang R et al (2019) An accurate recognition of infrared retro-reflective markers in surgical navigation. J Med Syst 43(6):153. https://doi.org/10.1007/s10916-019-1257-x
Zhang Z (2004) Camera calibration with one-dimensional objects. IEEE Trans Pattern Anal Mach Intell 26(7):892–899. https://doi.org/10.1109/tpami.2004.21
Zhang M, Wu B, Ye C et al (2019) Multiple instruments motion trajectory tracking in optical surgical navigation. Opt Express 27(11):15827–15845. https://doi.org/10.1364/oe.27.015827
Funding
Research towards this project was supported in part by the University of California, San Diego General Campus Research Senate Grant #2019201 and the Galvanizing Engineering in Medicine (GEM) initiative.
Author information
Contributions
DT designed the methodology, wrote the main manuscript text, and prepared Figs. 1, 2, 3 and 4. KR, CM, MT, AR helped design the methodology, conducted the results and wrote sections of the manuscript text and prepared Figs. 1 and 3. MJ wrote sections of the manuscript text and provided clinical advice. FA provided clinical expertise and advice toward the design of the system. FT provided research direction on the project. All authors reviewed the manuscript.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Appendices
Appendix 1. Calibration protocol
Before any experiments can be performed, the two cameras first need to be calibrated in order to find the cameras’ intrinsic parameters. These parameters are used by OpenCV to analyze the 3D positions of the ArUco marker.
To calibrate the cameras, a calibration checkerboard with 8 by 11 vertices is used. 30 images of the checkerboard were taken with each web camera. For each image, we introduced slight rotations to the checkerboard and adjusted the distance between the cameras and the checkerboard. These images were iteratively run through a Python script with OpenCV in order to detect the checkerboard corners and compute the intrinsic parameters. More information and the calibration scripts used can be found on our GitHub page.
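A condensed sketch of this loop for one camera is given below; the image path pattern is hypothetical, and the actual scripts are available in the repository.

```python
import glob
import cv2
import numpy as np

# Intrinsic calibration from checkerboard images (8 x 11 inner corners).
pattern = (8, 11)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in sorted(glob.glob("calib/left_*.png")):  # hypothetical path pattern
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 intrinsic matrix; dist holds the distortion coefficients.
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```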
Appendix 2. Color spaces
Hue, Saturation, Lightness (HSL) and Hue, Saturation, Value (HSV) are color spaces that aim to represent colors more intuitively for humans than Red, Green, Blue (RGB). Specifically, the defining feature of HSL/HSV is that they separate brightness information (modeled as either Lightness or Value) from chromatic information (modeled as Hue and Saturation). The main difference between HSL and HSV is the way they model lighter colors. In HSL, generating pure white requires maximizing Lightness to 100%. In HSV, however, generating pure white requires maximizing Value to 100% and decreasing Saturation to 0%.
Appendix 3. Positioning platform error analysis
To verify the accuracy of the theoretical position in tracking experiments, we moved the marker on the positioning platform along both the X and Y axes in fixed increments. At each increment, the true displacement of the marker was measured against the commanded increment. We developed two tests to measure the accuracy of our positioning platform: the Reset test and the Increment test.
In the Reset test, we move the marker in fixed 10 mm increments, resetting it to the origin after each movement. For example, when examining the X direction, the marker platform initiates with a 10 mm movement in the first trial. Subsequently, it returns to the origin before moving 20 mm in the X direction. This process is repeated for both X and Y directions, ranging from 10 to 300 mm in 10 mm increments, for three trials. The Increment test follows a similar procedure to the Reset test, except the marker is not returned to its original position after every trial. In both the X and Y directions, the marker platform moves in 10 mm increments until it reaches 300 mm for three trials. Table 3 reports the positioning errors associated with both tests.
Appendix 4. Tracking error analysis
To process the experimental data, we separated the positioning platform data into its individual movements according to Fig. 3. We applied a scaling factor of (120 mm/segment distance) to each individual movement to get the real-life distance traveled.
To determine the theoretical marker position corresponding to each experimental data point, we identified the initial and final theoretical positions for each individual movement and translated these values into 3D coordinates based on the stereoscopic camera readings. Because the positioning platform moves at constant velocity, we performed linear regression to model the theoretical movement using the isolated corner points. For each movement, we interpolated along the experimental axis with the greatest variance. For instance, when the positioning platform moved horizontally or diagonally, we utilized the experimental x value to obtain the theoretical \(\hat{y}\) value. Conversely, in vertical movements, we used the experimental y value to derive the interpolated theoretical \(\hat{x}\) value. We report the RMSE for each trial by comparing these theoretical points with the experimental values.
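A simplified sketch of this per-segment computation follows; the function and variable names are hypothetical, and the theoretical endpoint coordinates are assumed to be passed in.

```python
import numpy as np

def segment_rmse(exp_xy, p_start, p_end):
    """RMSE of one movement segment against the line through its endpoints.

    exp_xy:         (N, 2) experimental positions for one linear movement
    p_start, p_end: theoretical start and end corner points of the movement
    """
    # Interpolate along the axis with the greater experimental variance.
    axis = int(np.var(exp_xy[:, 1]) > np.var(exp_xy[:, 0]))  # 0 = x, 1 = y
    other = 1 - axis
    # Line through the theoretical corner points (constant-velocity motion).
    slope = (p_end[other] - p_start[other]) / (p_end[axis] - p_start[axis])
    pred = p_start[other] + slope * (exp_xy[:, axis] - p_start[axis])
    return float(np.sqrt(np.mean((exp_xy[:, other] - pred) ** 2)))
```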
While calculating RMSE, we made sure to remove obvious outliers in the experimental data. We utilize Random Sample Consensus, or RANSAC, to approximate a linear regression model. This allows us to identify outliers without the need for manual thresholding.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.