IMMERSIVE AUGMENTED REALITY APPLICATIONS IN JAW AND KNEE REPLACEMENT SURGERY: ACCURACY AND PROCESSING TIME RESULTS

 

Shahad Ahmed Salih a, Ali A. Albabawat b, Lamya Abdulateef Omer b,

Razwan Mohmed Salah b, Mohammed Hikmat Sadiq b

a Duhok Technical Institute, Duhok Polytechnic University, Duhok, Kurdistan Region, Iraq - sh.shahadahmed@gmail.com,

b Faculty of Science, University of Duhok, Duhok, Kurdistan Region, Iraq - (ali.abas, lamya.omer, razwan.mayi, mohammed.sadiq)@uod.ac

 

Received: 19 May 2023 / Accepted: 31 Mar 2024 / Published: 13 May 2024.          https://doi.org/10.25271/sjuoz.2024.12.2.1159


ABSTRACT:

In medicine and healthcare, augmented reality (AR) has been used by physicians during surgical procedures. It has proved helpful in preoperative planning and procedure navigation, allowing them to display in-depth information and visualize details in real time during surgery while prioritizing patient safety and healthcare. Due to the critical nature of surgical procedures, extreme accuracy is required when using AR technology to protect patients' health. Until a few years ago, AR faced several challenges and limitations in surgery, such as noise in real-time images, cutting errors, navigation errors, wrong implant placement, overlay errors, navigating narrow areas, geometric accuracy limitations, image alignment, image registration, and occlusion handling. This paper reviews several recently published articles exploring the use of AR technology in jaw and knee replacement surgery, focusing on identifying the newest technologies, methods, and solutions for the abovementioned limitations. Based on data collected from the published papers, the results were compared for each problem solved in each article in terms of accuracy and processing time.

KEYWORDS: augmented reality, surgery, 2D/3D image registration, image segmentation, 2D/3D matching.


1.     INTRODUCTION

        Augmented Reality (AR) has proven to be extremely beneficial in the medical industry. AR technology is used in the training and education of surgeons, where it has succeeded in improving surgeons' performance and increasing their degree of qualification (Siff & Mehta, 2018), (Logishetty & Western, 2019), (Al Janabi et al., 2020), (Rochlen et al., 2017), (Borgmann et al., 2017), (Andersen et al., 2016), and (Bourdel et al., 2017). Some augmented reality systems were then developed for direct use in operations. AR provides surgeons with a 3D virtual model of the surgical field (Fida et al., 2018) and (Ayoub & Pulijala, 2019). Virtual and Augmented Reality (VR/AR) are used in surgery for pre-operative planning and surgeon training (Halabi et al., 2020), (Joda et al., 2019), (Farronato et al., 2019), and (Towers et al., 2019). AR is used in complex and sensitive surgical fields such as heart, kidney, brain, pelvis, thoracic, artery, and jaw surgery through video displays, see-through or transparent displays, and projection-based displays (Suenaga et al., 2015), (Suenaga et al., 2013), (Murugesan et al., 2018), and (Tanji et al., 2022). In video-based displays, virtual images are overlaid on a real-time video stream, creating a three-dimensional (3D) augmented reality view that supports the user's perception of depth, motion, and stereo parallax. In see-through displays, images are superimposed on a transparent mirror/device so that the user can see clearly through the transparent screen.

        A transparent screen is an electronic display that lets the viewer see both what is shown on the glass panel and what lies behind it. Like transparent screens, projection-based displays overlay virtual images, via projectors, that users can view directly (Murugesan et al., 2018). AR-based surgery creates an augmented view for the surgeon by combining visual images acquired before surgery with real-time images captured during surgery. AR provides surgeons in the operating room with more practical and understandable knowledge during the operation, which directs them through it (Andersen et al., 2016), (Bin et al., 2020). To assist physicians in viewing virtual video on the computer, a two-dimensional (2D) virtual video system is created. AR has emerged as a cutting-edge technology in medical surgery (Murugesan et al., 2018). It provides a 3D perspective by superimposing different visual representations onto real-time images (Sielhorst et al., 2008). AR-based surgery still faces several limitations and challenges, such as occlusion, real-time image noise, image registration, and long processing times. This paper aims to discuss these limitations and challenges and to highlight the newest techniques in AR-based surgery that achieve accurate results within the shortest processing time. Figure 1 shows various types of surgeries, including traditional surgeries, video-guided surgeries, and AR-guided surgeries (Murugesan et al., 2018).

Figure 1: Current types of surgeries: a) traditional surgery, b) video-guided surgery, c) AR-guided surgery. (These images were obtained via the Google search engine and are free to use, share, or modify, even commercially.)

         Traditionally, the operating room or surgical suite faces unique challenges and barriers, for instance long working hours, patient follow-up, erroneous data, and limited real-time access to patient imaging and information. AR can therefore help surgeons study 3D scans in the pre-operative phase, plan proactively, store images, and save time during procedures.

         The rest of the paper is structured as follows. Section 2 presents the AR systems proposed in the field of surgery, and Section 3 introduces the AR technology system workflow based on three phases: the pre-operative phase, the intra-operative phase, and the pose-refinement phase. The evaluation of the proposed systems, based on the state of the art of each paper reviewed, is presented in Section 4. Section 5 then combines and discusses the results found. Finally, conclusions and future work are presented in Section 6.

2.     RESEARCH SCOPE

         Surgery is one of the fields with the most demanding accuracy requirements, where the precision and speed of the devices and techniques used during operations must be verified; its main objective is to protect patients' safety and lives. This review focuses on jaw surgery (Bosc et al., 2019), (Wang et al., 2019), and (Carpinello et al., 2021), dental implants (Pellegrino et al., 2019) and (Kivovics et al., 2022), and knee replacement surgery (Tsukada et al., 2019) and (Goh et al., 2021). The following sections explain the modus operandi of AR in surgery.

         In 2021, Shrestha et al. proposed an AR system for dental surgery based on the Enhanced Iterative Closest Point (E-ICP) algorithm. This scheme improves the convergence of the CT scan model of the teeth with real-time stereoscopic images of the patient's teeth, providing high registration accuracy.

Basnet et al. (2018) proposed an augmented reality-based surgery system that improves navigation accuracy by eliminating noise and obstruction when the implant is placed relative to the jawbone to be cut or drilled. Furthermore, in jaw surgery, an AR system for narrow-area navigation has been proposed to reduce positional error during surgery and thereby increase navigational accuracy (Budhathoki et al., 2020).

In addition, Murugesan et al. (2018) proposed an AR-based system for Oral and Maxillofacial Surgery (OMS) focusing on geometric accuracy. The proposed AR system improved video accuracy and depth perception.

         On the other hand, in 2019, Pokhrel et al. proposed an AR system to decrease the cutting error and thus minimize the chance of chronic pain that patients experience after knee replacement surgeries.

Bayrak et al. (2020) and Manohar et al. (2020) each proposed a novel AR system for oral and maxillofacial surgery. These systems aimed to improve overlay accuracy while decreasing processing time.

         Finally, Maharjan et al. (2021) proposed a novel AR system for knee replacement surgery to reduce registration error, prevent results from being caught in local minima so as to enhance alignment, handle occlusion, and optimize overlapping sections.

3.     SYSTEM WORKFLOWS

         This section presents several papers on AR in surgery. These papers have been reviewed to identify the newest technologies, methods, and tools of AR systems, and what they have achieved in terms of accuracy within a short processing time. Surgically, the implementation and use of AR technology move through three phases: the pre-operative, intra-operative, and pose-refinement phases.

3.1. Pre-operative phase

         In this phase, Computed Tomography (CT) data of the patient are acquired. CT scan data provide more precise details and information on bones and nerves than other medical imaging modalities. This phase is implemented in three steps: CT imaging, segmentation, and reconstruction. These steps are mentioned by most of the papers reviewed in this work, as shown in Figure 2.


Figure 2: The pre-operative stage general steps

        In the system proposed by Shrestha et al. (2021), after the CT scan of the teeth is obtained, it is segmented using a threshold-based segmentation technique that delineates a precise surgical space. Details such as the orientation, position, and appearance of the implant are included in the reconstruction of the tooth model. After registration, the image containing these data is aligned with the stereo stream to create an AR scene. An optical tracking device is used to calibrate the surgical drill and provide precise drill positioning in real time.
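
To make the segmentation step concrete, the following is a minimal sketch of threshold-based segmentation of a CT slice, written in Python rather than the MATLAB used by the reviewed systems; the threshold value and array shapes are illustrative assumptions, not values from the papers.

```python
import numpy as np

def threshold_segment(ct_slice, threshold=300):
    """Threshold-based segmentation of a CT slice.

    Voxels above the threshold (in Hounsfield units, where bone is
    typically bright) become foreground; the rest is background.
    The threshold of 300 HU is an illustrative assumption.
    """
    mask = ct_slice > threshold                  # binary foreground mask
    foreground = np.where(mask, ct_slice, 0)
    background = np.where(mask, 0, ct_slice)
    return mask, foreground, background

# Usage with a synthetic slice standing in for real CT data.
ct_slice = np.random.randint(-1000, 1500, size=(512, 512))
mask, fg, bg = threshold_segment(ct_slice)
print("foreground fraction:", mask.mean())
```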

        In the other systems, proposed by Murugesan et al. (2018), Shrestha et al. (2021), Budhathoki et al. (2020), Bayrak et al. (2020), and Manohar et al. (2020), the patients' CT data were acquired and then segmented. Subsequently, a hierarchical model is created. This reconstruction allows matching the real-time images against the aspect-graph images (offline phase), which present different segments of the CT data from many angles and perspectives.

        In the system proposed by Maharjan et al. (2021), the starting point is likewise the CT scan, whose data are then partitioned so that the surgical areas are delineated with maximum precision. Using a segmentation threshold, each image is separated into foreground and background images. The knee surgical plan is based on the reconstructed image, which contains the region of interest.

3.2. Intra-operative phase


         In this phase, three steps are used: stereo or optical cameras, implementation of enhanced algorithms, and pyramid building of video frames, as demonstrated in Figure 3.

Figure 3: The intra-operative stage general steps

 

         Most of the proposed methods use an optical camera because it provides the standard image size and quality that must be preserved in the surgical field, while requiring less image-processing time; this increases the quality of the live video frames. Furthermore, depth perception can be improved by using stereo cameras and a transparent mirror.
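
The depth gain from a stereo pair follows from standard triangulation geometry (a textbook relation, not a formula stated in the reviewed papers): for two parallel cameras with focal length f and baseline B, a point observed with disparity d between the two images lies at depth

```latex
Z = \frac{f\,B}{d}
```

so a wider baseline or a finer disparity estimate yields sharper depth discrimination.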

         Shrestha et al. (2021) use Principal Component Analysis (PCA) to roughly align the model derived from the pre-operative CT scan with the 3D model acquired by an intraoral scanner, obtaining the three principal axes of the dental model. This is done by applying Singular Value Decomposition (SVD) to a zero-mean data matrix. In addition to the center of gravity, another four symmetrical points are set between the models. The CT-derived model is then matched to the corresponding 3D model using the Iterative Closest Point (ICP) algorithm, and the rotation that reduces the distance between corresponding points is calculated. At each iteration, SVD computes the optimal transformation from the current correspondences.
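
As a rough illustration of this PCA step (a Python sketch under simplified assumptions; the reviewed system was implemented in MATLAB and uses additional symmetry points not shown here), the principal axes of a point cloud can be obtained by applying SVD to the zero-mean data matrix:

```python
import numpy as np

def principal_axes(points):
    """Principal axes of a 3D point cloud via SVD of the centered data."""
    centered = points - points.mean(axis=0)      # zero-mean data matrix
    # Rows of Vt are the principal directions, ordered by variance.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt

def coarse_align(src, dst):
    """Rotate src so its principal axes match dst's (coarse pre-alignment).

    Note: PCA axes have a sign ambiguity; real systems resolve it with
    extra landmarks, such as the symmetry points mentioned above.
    """
    R = principal_axes(dst).T @ principal_axes(src)
    return (src - src.mean(axis=0)) @ R.T + dst.mean(axis=0)
```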

        According to Basnet et al. (2018), the live surgical video was captured using optical cameras to generate video frames. Noise was removed from each frame using the Modified Kernel Non-Local Means (MKNLM) filter. The results showed that object edges were better preserved and contour leakage was prevented.
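
For reference, the sketch below shows the baseline non-local means weighting that the MKNLM filter builds on (the paper's kernel modification is not reproduced here); it is a slow, illustrative Python version with assumed window sizes:

```python
import numpy as np

def nlm_denoise(img, search=5, patch=3, h=0.1):
    """Simplified single-channel non-local means.

    Each output pixel is a weighted average over a search window, where
    weights decay with the squared difference between the patches
    surrounding the reference pixel and each candidate pixel.
    """
    pad = search + patch
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            pi, pj = i + pad, j + pad
            ref = padded[pi - patch:pi + patch + 1, pj - patch:pj + patch + 1]
            weights, acc = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    qi, qj = pi + di, pj + dj
                    cand = padded[qi - patch:qi + patch + 1,
                                  qj - patch:qj + patch + 1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / (h * h))
                    weights += w
                    acc += w * padded[qi, qj]
            out[i, j] = acc / weights
    return out

# Usage on a small synthetic image (the loop version is slow on large frames).
noisy = np.clip(np.ones((32, 32)) * 0.5 + 0.1 * np.random.randn(32, 32), 0, 1)
clean = nlm_denoise(noisy)
```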

        Depending on the resolution, the images are organized into a pyramid, with the lowest-resolution images at the highest level. The lowest-resolution image is reserved for the Tracking Learning Detection (TLD) algorithm, which uses the Lucas-Kanade median-flow tracker to find and follow the bounding box of the object of interest. If the Lucas-Kanade tracker fails to track the feature points because of occlusion on the object of interest, the TLD algorithm fails as well. Therefore, the occlusion can be removed using image reconstruction, based on pixel classification, to detect the object of interest and improve the reconstruction quality. Through the image hierarchy, a bounding box localizes the point of interest; this box helps reduce both the search area and the processing time. For the initial alignment, Ulrich's method has been used, which removes the need for manual initial registration and reduces the possibility of human error. This aids in generating the 2D pose of the object of interest, and the best-matching image is then submitted to 3D pose refinement using the ICP algorithm.
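
A minimal sketch of this pyramid-plus-tracking idea using OpenCV's pyramidal Lucas-Kanade tracker follows (Python, with an assumed video file name and a placeholder bounding box; the reviewed systems' MATLAB TLD implementations add learning and detection components not shown here):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("surgery.mp4")            # hypothetical video source
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Detect feature points inside an assumed bounding box of interest.
x, y, w, h = 100, 100, 200, 200                  # placeholder ROI
roi_mask = np.zeros_like(prev_gray)
roi_mask[y:y + h, x:x + w] = 255
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                             qualityLevel=0.01, minDistance=7, mask=roi_mask)

while True:
    ok, frame = cap.read()
    if not ok or p0 is None:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade: maxLevel controls the pyramid depth.
    p1, st, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None,
                                           winSize=(21, 21), maxLevel=3)
    good = p1[st.flatten() == 1]
    if len(good) == 0:
        break                                    # tracking lost (e.g., occlusion)
    # Median displacement gives a robust box update (the median-flow idea);
    # in a full system this box would drive the AR overlay.
    dx, dy = np.median(good - p0[st.flatten() == 1], axis=0).ravel()
    x, y = int(x + dx), int(y + dy)
    prev_gray, p0 = gray, good.reshape(-1, 1, 2)
cap.release()
```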

         Budhathoki et al. (2020) used an optical camera to record 2D surgical video in real time, which is then passed to a TLD that monitors the surgical field using a bounding box. The 2D image is then superimposed on the video frame within the bounding box. Using the optical camera, the motion history of the cutting tool is recorded, the cut area is rechecked, and the remaining area to be cut is recalculated. The surgical instrument is then subdivided into two parts (the shaft and the end effector), each with its own characteristics. A trained Convolutional Neural Network (CNN) is used to track and detect the end effector, the line features, and the edge points on the shaft. When the 3D position is estimated, the input vector contains the instrument image descriptors, and each output value is one of the 3D coordinates of the tip. Based on the position relative to the endoscopic image, the CT volume data showing the 3D position of the surgical instrument are displayed to obtain a foreground image of the chin. The color transfer function, which adjusts the pixel values of the image, preserves image quality in real time. Finally, Ulrich's approach performs the initial registration automatically after the initial alignment.
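
The color transfer step can be illustrated with simple Reinhard-style statistics matching (a plausible stand-in, assumed here rather than taken from the paper, which does not spell out its exact formulation):

```python
import numpy as np

def color_transfer(src, ref):
    """Match the channel-wise mean and standard deviation of src to ref,
    a simple Reinhard-style adjustment of pixel values."""
    src_f, ref_f = src.astype(float), ref.astype(float)
    normalized = (src_f - src_f.mean(axis=(0, 1))) / (src_f.std(axis=(0, 1)) + 1e-8)
    out = normalized * ref_f.std(axis=(0, 1)) + ref_f.mean(axis=(0, 1))
    return np.clip(out, 0, 255).astype(np.uint8)
```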

          Murugesan et al. (2018) used two stereo cameras to capture the surgical video. Ulrich's approach was used to resolve the initial alignment problem. This method generates an aspect graph from a series of segmented CT images that can be mapped to intra-operative images; the aspect graph creates multiple models that can be matched in real time. Feature extraction is based on the TLD algorithm to decrease the operational time. From the gathered aspect graphs, the most closely related image is extracted by matching the online and offline stages. The stereo camera is fixed on the 3D display and views the patient's body through a translucent mirror. The camera videos are superimposed onto the 3D monitor to generate a 3D pose along with the image registration results.

         Bayrak et al. (2020) divided the video recorded by the optical camera into video frames. An image-resolution-based hierarchical model containing all frames was designed to assist during online matching. To determine the area of interest, the frames are fed into the TLD algorithm using the bounding-box approach. Pixel cycle time is also decreased by comparing pixels block by block instead of pixel by pixel. All frames go through the same procedure, and the improved TLD algorithm eliminates or minimizes the disruption caused by occlusions from the physician's hand or surgical instruments. Online matching is then performed between the pre-operative stage model and the real-time video frames.

        Manohar et al. (2020) suggested using a TLD algorithm to track the area of interest in each video frame captured during surgery, removing occlusion and then applying image reconstruction. They also proposed removing the MKNLM filter, since there are no vibrations in modern operating theatres; even the patient's free facial movement is unlikely to cause vibration.

Pokhrel et al. (2019) send real-time video recorded with an optical camera to a TLD tracking algorithm, where the bounding box helps the TLD track the target region of interest. A 2D image is then added to the bounding box of the video frame. The optical camera records the motion history of the cutting tool and also scans the cutting area. The remaining area to be cut is measured using volume subtraction, and a 2D image of its size is produced using a difference-map technique. The color transfer function maintains image accuracy in real time by adjusting the pixel values. Ulrich's method automatically initializes the knee model during the initial orientation. After the color transfer function is applied, the remaining cut surface is determined using a Markov random field surface reconstruction algorithm.
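
The volume subtraction idea can be sketched on binary voxel grids as follows (an illustrative Python toy with assumed grid size and voxel pitch, not the paper's implementation):

```python
import numpy as np

# planned_cut marks voxels that must be removed; removed marks voxels the
# tool has already cut (accumulated from the tracked tool's motion history).
planned_cut = np.zeros((64, 64, 64), dtype=bool)
planned_cut[20:40, 20:40, 20:40] = True          # hypothetical target region

removed = np.zeros_like(planned_cut)
removed[20:40, 20:30, 20:40] = True              # hypothetical cut so far

remaining = planned_cut & ~removed               # voxels still to be cut
voxel_volume_mm3 = 0.5 ** 3                      # assumed 0.5 mm voxel pitch
print("remaining volume:", remaining.sum() * voxel_volume_mm3, "mm^3")
```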

        In Maharjan et al. (2021), the surgeon first identifies the damaged cartilage from the CT results, and the incision is then made using stereo-based monitoring. The use of two stereo sensors achieves binocular vision that, like natural eyes, aids depth perception during surgery; the two stereo cameras record the surgical video and overlay images onto the patient's organ.

3.3. Pose-refinement phase

        In this phase, the output of the previous phase is used to form an AR visualization in three steps: matching the best images to a 2D pose, 3D pose refinement, and AR display, as illustrated in Figure 4.


Figure 4: The pose refinement phase

        The best-aligned selected images are used to construct a realistic 2D model. The ICP algorithm is then used to optimize the pose, resulting in an accurate 3D form. As a final step, the AR video is produced by projecting the refined 3D pose and the video stream onto the display screen (Shrestha et al., 2021).
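
Since ICP-based pose refinement recurs across the reviewed systems, the following is a minimal point-to-point ICP sketch in Python (brute-force matching plus an SVD-based rigid fit; the papers' enhanced variants add weighting, outlier rejection, and other refinements not shown here):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/SVD fit: rotation R and translation t minimizing ||R src + t - dst||."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30, tol=1e-6):
    """Point-to-point ICP: alternate nearest-neighbour matching and rigid fit."""
    cur, prev_err = src.copy(), np.inf
    for _ in range(iters):
        # Brute-force nearest neighbours (adequate for a sketch).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        err = np.sqrt(d2.min(axis=1)).mean()     # mean match distance pre-update
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return cur
```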

Basnet et al. (2018) aimed to remove the geometric error and reduce the probability of incorrect pose selection, enhancing the model overlay and registration accuracy using the Rotational Matrix and Translation Vector (RMaTV) algorithm.

Budhathoki et al. (2020) modified the ICP algorithm to iteratively refine the pose overlaid on the surgical region and construct an accurate 3D form. The RMaTV algorithm improves precision by eliminating geometric errors. The stereo camera projects the refined 3D pose of the surgical instrument and the camera video stream onto a transparent mirror, which is positioned above the surgical region and observed by the surgeon during the operation.

        In 2018, Murugesan et al. used the ICP algorithm to carry out the pose refinement, applying the point-to-point technique to obtain a precise 3D structure. The display device presented the refined 3D pose together with the camera video stream, generating the AR video.

Later, Bayrak et al. (2020) refined the two images given intra-operatively. A rotation-invariant, measurement-based ICP is used to fix the initial parameters, employing the error metric equation proposed in (Manohar et al., 2020), which adjusts rigid and non-rigid bodies to achieve high accuracy. As a result, the scheme becomes more stable and accurate across all angles and any occlusions that may arise. The proposed system's error metric is based on the Manhattan metric rather than the Euclidean metric to save computing power.
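
The computational saving comes from the L1 norm avoiding squares and square roots; a small illustrative comparison on synthetic matched point sets (not data from the paper):

```python
import numpy as np

a = np.random.rand(1000, 3)                      # matched point sets (illustrative)
b = np.random.rand(1000, 3)

# Euclidean error: per-point L2 norm (requires squaring and a square root).
euclidean = np.linalg.norm(a - b, axis=1).mean()

# Manhattan error: per-point L1 norm (absolute differences only, so it is
# cheaper to evaluate, which is the saving Bayrak et al. exploit).
manhattan = np.abs(a - b).sum(axis=1).mean()
```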

        Furthermore, from the image comparisons, the best-matching 2D image is selected, and the 3D pose created by the aspect graph is refined from the 2D pose of the video alignment to produce the AR view. In addition, the Modified Rigid and Non-Rigid Iterative Closest Point algorithms (MR-ICP and NR-ICP) are used; their error metrics ensure precision and rapid convergence for both ICP versions (Manohar et al., 2020).

         According to Pokhrel et al. (2019), the ICP refined the pose overlaid onto the surgical part in an iterative procedure. Applying the RMaTV algorithm improves accuracy by eliminating geometric errors.

        According to Maharjan et al. (2021), the ICP process is improved by using the Bidirectional Maximum Correntropy Criterion (BIMCC) in 3D ICP registration. By removing outliers and non-Gaussian noise from the registration process, the alignment error is reduced. BIMCC is reliable and convenient, establishing the correspondence between two point clouds and determining the optimal rigid transformation, thus improving accuracy and processing time. The images are then refined with a stereo matching algorithm, and a Gaussian filter helps eliminate geometric errors. Finally, the stereo camera displays the operation scene on a transparent mirror mounted over the operating area, which the physicians observe during the operation.
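
For orientation, the standard maximum correntropy criterion that BIMCC builds on replaces the squared-error objective with a Gaussian kernel of the residuals, so large outlier residuals contribute almost nothing (this is the textbook form; the bidirectional variant in Maharjan et al. (2021) evaluates correspondences in both matching directions and may differ in detail):

```latex
G_\sigma(e) = \exp\!\left(-\frac{e^{2}}{2\sigma^{2}}\right), \qquad
(R^{*}, t^{*}) = \arg\max_{R,\,t} \sum_{i} G_\sigma\!\left(\lVert R\,p_i + t - q_i \rVert\right)
```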

          Figure 5 shows how the proposed system works for jaw surgery (Budhathoki et al., 2020): a) image registration aligns the CT images for surgery; b) the real-time image provides a live 3D surgical view that enhances the operation; c) the image overlay highlights the jaw regions to guide the surgeon.


Figure 5: a) Image registration, b) real-time image, c) image overlay (Budhathoki et al., 2020).

Figure 6 shows how the proposed system works for knee replacement surgery (Maharjan et al., 2021): a) image registration aligns the images for surgical planning; b) the stereo matching algorithm with stereo-based tracking provides 3D tracking during surgery; c) image alignment with the correntropy algorithm yields precise alignment; d) bone drilling is guided using the image overlay.


 



Figure 6: a) Image registration, b) stereo matching algorithm with stereo-based tracking, c) image alignment with correntropy algorithm, d) bone drilling using image overlay (Maharjan et al., 2021).



Table 1: Proposed system implementation and evaluation of the reviewed papers.

Author | Area of Study | Problem | Technique Used | Accuracy | Processing Time | Software Used
Murugesan et al. [16] | Oral and Maxillofacial Surgery | Geometric accuracy and image registration | Rotational matrix and translation vector algorithm, incorporating two stereo cameras and a transparent mirror | 0.30-0.40 mm overlay error (video accuracy) | 10-13 frames per second | MATLAB R2017a
Shrestha et al. [27] | Dental Surgery | Incorrect implant placement | Enhanced iterative closest point algorithm to reduce error; weighting mechanism and median value to reduce alignment error; random sample consensus algorithm to detect and remove outliers | 0.33 mm registration accuracy | 14 frames per second | MATLAB R2019a
Basnet et al. [28] | Jaw Surgery | Noise in real-time images | Occluded-item removal using a weighting-based de-noising filter with depth-mapping-based occlusion removal | 0.23-0.35 mm image overlay (video accuracy) | 8-12 frames per second | MATLAB R2017b
Budhathoki et al. [29] | Jaw Surgery | Navigating narrow areas | 2D and 3D system tracking the surgical tool, which consists of the shaft and the cutting unit | 0.25-0.35 mm video accuracy | 11-14 frames per second | MATLAB R2019a
Bayrak et al. [30] | Oral and Maxillofacial Surgery | Image registration when matching two dissimilar posture images; processing time | Iterative Closest Point (ICP) algorithm combining a rotation invariant with the Manhattan error metric | 0.22-0.30 mm overlay accuracy | 10-14 frames per second | MATLAB R2018b
Manohar et al. [31] | Oral and Maxillofacial Surgery | Occlusion in the area of interest; reducing processing time and improving accuracy | Enhanced Tracking Learning Detection (TLD); Modified Rigid and Non-Rigid Iterative Closest Point (MRaNRICP) using a new error metric | 0.22-0.29 mm overlay accuracy | 10-13 frames per second | MATLAB R2018b with deep learning and image processing toolboxes
Pokhrel et al. [32] | Knee Replacement Surgery | Cutting errors | Volume subtraction technique | Cutting error minimized to ~1 mm; 0.40-0.55 mm video accuracy | 9-10 frames per second | MATLAB R2017b
Maharjan et al. [33] | Knee Replacement Surgery | Image registration and image alignment | Markerless image registration method for guiding and visualizing the surgical procedure | 0.57-0.61 mm video accuracy | 7.4-11.74 frames per second | MATLAB R2019b

4.     PROPOSED SYSTEMS IMPLEMENTATION AND EVALUATION

         As shown in Table 1, eight published papers proposing state-of-the-art systems were selected, and their results show that video accuracy (precision) and processing time have been improved. The data used in all these studies were collected from online databases as well as YouTube videos, and are available for study, scientific research, and medical students. Overall, it is worth mentioning that the hardware and software used to implement such technologies affect the results achieved by each system and clearly influence the processing time.

5.     RESULTS AND DISCUSSION

         Based on the state of the art in the recent papers reviewed, the proposed AR systems for surgery have improved accuracy and processing time. The process is carried out in three stages: pre-operative, intra-operative, and pose refinement. The results are based on image registration and overlay, which depend on the movement of the patient and the surgical tools. The patients' CT scan images were used because they provide more accurate details and information than other medical images. An optical camera decreases system processing time by eliminating the need to convert the 3D model to a 2D video frame during the online phase; it also enhances system performance and aids in tracking the history of the operated-on region.

        Additionally, the quality of the live video frames was increased. Using two stereo cameras and a transparent mirror helped improve depth perception. The aspect graphs aided matching the objects in real-time videos from multiple angles and rotations, in addition to generating a hierarchical structure of segmented images.

        The enhanced ICP algorithm integrated with the weighting mechanism reduced the error caused by false point matching by preventing multiple alignment errors from accumulating.

The RANdom SAmple Consensus (RANSAC) algorithm increases system speed by reducing the number of iterations and the outlier effect on the estimation.
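
As an illustration of why RANSAC is fast and outlier-robust, here is a toy Python sketch estimating a 2D translation between matched point sets (an assumed, simplified setting; the reviewed systems apply RANSAC within their own registration pipelines):

```python
import numpy as np

def ransac_translation(src, dst, iters=200, thresh=1.0, seed=0):
    """Toy RANSAC: estimate a 2D translation between paired points,
    robust to outlier correspondences (illustrative, not the papers' code)."""
    rng = np.random.default_rng(seed)
    best_inliers, best_t = 0, np.zeros(2)
    for _ in range(iters):
        i = rng.integers(len(src))               # minimal sample: one pair
        t = dst[i] - src[i]                      # candidate translation
        resid = np.linalg.norm(src + t - dst, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers:
            best_inliers = inliers.sum()
            # Refit on all inliers for the final estimate.
            best_t = (dst[inliers] - src[inliers]).mean(axis=0)
    return best_t

# Demo: 20% of the correspondences are corrupted outliers.
rng = np.random.default_rng(1)
src = rng.random((100, 2)) * 10
dst = src + np.array([2.0, -1.0])                # true translation
dst[:20] += rng.normal(0, 5, size=(20, 2))       # outlier matches
print(ransac_translation(src, dst))              # approximately [2, -1]
```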

         Alignment accuracy is improved by using an intra-oral 3D scanner to acquire a 3D model of the tooth that matches the model obtained by computed tomography. Moreover, the initial registration is done using Ulrich's approach to avoid the human error that is possible when the initial registration must be done manually.

         The improved TLD, used for long-term tracking and detection in real-time video, minimizes the search area through the bounding box, lowering the processing time. Also, the RMaTV algorithm eliminates geometric error, increases registration accuracy, and decreases image overlay error. Moreover, using the ICP algorithm in conjunction with BiMCC improved alignment and registration, maximized the overlap between point clouds, and eliminated registration results stuck in local minima as well as non-Gaussian noise and outliers, thus ensuring high registration accuracy.

        To obtain results in a shorter processing time, the proposed systems were implemented in MATLAB. MATLAB functions are used to extract data points from images and videos; a matching function selects the best match, and a forward transform function shifts the points to overlap them.

        Furthermore, using a video-based display produces more accurate results than see-through and projection-based displays (Murugesan et al., 2018). These techniques, tools, and algorithms were used in the proposed solutions to achieve the highly accurate results that the surgery needs in real-time operations.


6.     CONCLUSION AND FUTURE WORK

         AR technologies used in surgery work together to provide better surgical results and help physicians. ICP algorithms, among others, have been used to address noise removal, cutting errors, navigational errors, wrong implant placement, overlay errors, narrow-area navigation, geometric accuracy limitations, image alignment, image registration, and occlusion handling.

     To improve the performance of augmented reality surgical systems, the researchers used the TLD algorithm for tracking and detection, and the ICP and RMaTV algorithms for removing geometric error, reducing overlay error, and enhancing image registration. Furthermore, they used optical and stereo cameras to capture real-time video, with a mirror in the operating room to direct the user's view. All these techniques have proven effective in providing high accuracy and in improving and speeding up the processing time required by the medical field.

        In future research aimed at assisting surgeons, there is potential to enhance AR visualization by introducing features such as the ability to magnify and display object layers, thereby offering more precise information, as well as enabling object rotation.

References

Siff, L. N., & Mehta, N. (2018). An interactive holographic curriculum for urogynecologic surgery. Obstetrics & Gynecology, 132, 27S-32S.

Logishetty, K., Western, L., Morgan, R., Iranpour, F., Cobb, J. P., & Auvinet, E. (2019). Can an augmented reality headset improve accuracy of acetabular cup orientation in simulated THA? A randomized trial. Clinical Orthopaedics and Related Research, 477(5), 1190.

Al Janabi, H. F., Aydin, A., Palaneer, S., Macchione, N., Al-Jabir, A., Khan, M. S., ... & Ahmed, K. (2020). Effectiveness of the HoloLens mixed-reality headset in minimally invasive surgery: a simulation-based feasibility study. Surgical Endoscopy, 34, 1143-1149.

Rochlen, L. R., Levine, R., & Tait, A. R. (2017). First person point of view augmented reality for central line insertion training: A usability and feasibility study. Simulation in healthcare: Journal of the Society for Simulation in Healthcare, 12(1), 57.

Borgmann, H., Rodríguez Socarrás, M., Salem, J., Tsaur, I., Gomez Rivas, J., Barret, E., & Tortolero, L. (2017). Feasibility and safety of augmented reality-assisted urological surgery using smartglass. World journal of urology, 35(6), 967-972.

Andersen, D., Popescu, V., Cabrera, M. E., Shanghavi, A., Gomez, G., Marley, S., ... & Wachs, J. P. (2016). Medical telementoring using an augmented reality transparent display. Surgery, 159(6), 1646-1653.

Bourdel, N., Collins, T., Pizarro, D., Bartoli, A., Da Ines, D., Perreira, B., & Canis, M. (2017). Augmented reality in gynecologic surgery: evaluation of potential benefits for myomectomy in an experimental uterine model. Surgical endoscopy, 31, 456-461.

Fida, B., Cutolo, F., di Franco, G., Ferrari, M., & Ferrari, V. (2018). Augmented reality in open surgery. Updates in surgery, 70(3), 389-400.

Ayoub, A., & Pulijala, Y. (2019). The application of virtual reality and augmented reality in Oral & Maxillofacial Surgery. BMC Oral Health, 19, 1-8.

Halabi, O., Balakrishnan, S., Dakua, S. P., Navab, N., & Warfa, M. (2020). Virtual and augmented reality in surgery. The disruptive fourth industrial revolution: technology, society and beyond, 257-285.

Joda, T., Gallucci, G. O., Wismeijer, D., & Zitzmann, N. U. (2019). Augmented and virtual reality in dental medicine: A systematic review. Computers in Biology and Medicine, 108, 93-100.

Farronato, M., Maspero, C., Lanteri, V., Fama, A., Ferrati, F., Pettenuzzo, A., & Farronato, D. (2019). Current state of the art in the use of augmented reality in dentistry: A systematic review of the literature. BMC Oral Health, 19(1), 1-15.

Towers, A., Field, J., Stokes, C., Maddock, S., & Martin, N. (2019). A scoping review of the use and application of virtual reality in pre-clinical dental education. British dental journal, 226(5), 358-366.

Suenaga, H., Tran, H. H., Liao, H., Masamune, K., Dohi, T., Hoshi, K., & Takato, T. (2015). Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study. BMC medical imaging, 15(1), 1-11.

Suenaga, H., Hoang Tran, H., Liao, H., Masamune, K., Dohi, T., Hoshi, K., ... & Takato, T. (2013). Real-time in situ three-dimensional integral videography and surgical navigation using augmented reality: a pilot study. International journal of oral science, 5(2), 98-102.

Murugesan, Y. P., Alsadoon, A., Manoranjan, P., & Prasad, P. W. C. (2018). A novel rotational matrix and translation vector algorithm: geometric accuracy for augmented reality in oral and maxillofacial surgeries. The international journal of medical robotics and computer assisted surgery, 14(3), e1889.

Tanji, A., Nagura, T., Iwamoto, T., Matsumura, N., Nakamura, M., Matsumoto, M., & Sato, K. (2022). Total elbow arthroplasty using an augmented reality–assisted surgical technique. Journal of Shoulder and Elbow Surgery, 31(1), 175-184.

Bin, S., Masood, S., & Jung, Y. (2020). Virtual and augmented reality in medicine. In Biomedical information technology (pp. 673-686). Academic Press.

Sielhorst, T., Feuerstein, M., & Navab, N. (2008). Advanced medical displays: A literature review of augmented reality. Journal of Display Technology, 4(4), 451-467.

Bosc, R., Fitoussi, A., Hersant, B., Dao, T. H., & Meningaud, J. P. (2019). Intraoperative augmented reality with heads-up displays in maxillofacial surgery: a systematic review of the literature and a classification of relevant technologies. International journal of oral and maxillofacial surgery, 48(1), 132-139.

Wang, J., Shen, Y., & Yang, S. (2019). A practical marker-less image registration method for augmented reality oral and maxillofacial surgery. International journal of computer assisted radiology and surgery, 14, 763-773.

Carpinello, A., Vezzetti, E., Ramieri, G., Moos, S., Novaresio, A., Zavattero, E., & Borbon, C. (2021). Evaluation of hmds by qfd for augmented reality applications in the maxillofacial surgery domain. Applied Sciences, 11(22), 11053.

Pellegrino, G., Mangano, C., Mangano, R., Ferri, A., Taraschi, V., & Marchetti, C. (2019). Augmented reality for dental implantology: a pilot clinical report of two cases. BMC Oral Health, 19(1), 1-8.

Kivovics, M., Takács, A., Pénzes, D., Németh, O., & Mijiritsky, E. (2022). Accuracy of dental implant placement using augmented reality-based navigation, static computer assisted implant surgery, and the free-hand method: an in vitro study. Journal of Dentistry, 119, 104070.

Tsukada, S., Ogawa, H., Nishino, M., Kurosaka, K., & Hirasawa, N. (2019). Augmented reality-based navigation system applied to tibial bone resection in total knee arthroplasty. Journal of Experimental Orthopaedics, 6(1), 1-7.

Goh, G. S., Lohre, R., Parvizi, J., & Goel, D. P. (2021). Virtual and augmented reality for surgical training and simulation in knee arthroplasty. Archives of Orthopaedic and Trauma Surgery, 141, 2303-2312.

Shrestha, L., Alsadoon, A., Prasad, P. W. C., AlSallami, N., & Haddad, S. (2021). Augmented reality for dental implant surgery: enhanced ICP. The Journal of Supercomputing, 77, 1152-1176.

Basnet, B. R., Alsadoon, A., Withana, C., Deva, A., & Paul, M. (2018). A novel noise filtered and occlusion removal: navigational accuracy in augmented reality-based constructive jaw surgery. Oral and maxillofacial surgery, 22, 385-401.

Budhathoki, S., Alsadoon, A., Prasad, P. W. C., Haddad, S., & Maag, A. (2020). Augmented reality for narrow area navigation in jaw surgery: Modified tracking by detection volume subtraction algorithm. The International Journal of Medical Robotics and Computer Assisted Surgery, 16(3), e2097.

Bayrak, M., Alsadoon, A., Prasad, P. W. C., Venkata, H. S., Ali, R. S., & Haddad, S. (2020). A novel rotation invariant and Manhattan metric–based pose refinement: Augmented reality–based oral and maxillofacial surgery. The International Journal of Medical Robotics and Computer Assisted Surgery, 16(3), e2077.

Manohar, S., Alsadoon, A., Prasad, P. W. C., Salah, R. M., Maag, A., & Murugesan, Y. (2020, November). A Novel Augmented Reality Approach in Oral and Maxillofacial Surgery: Super-Imposition Based on Modified Rigid and Non-Rigid Iterative Closest Point. In 2020 5th International Conference on Innovative Technologies in Intelligent Systems and Industrial Applications (CITISIA) (pp. 1-10). IEEE.

Pokhrel, S., Alsadoon, A., Prasad, P. W. C., & Paul, M. (2019). A novel augmented reality (AR) scheme for knee replacement surgery by considering cutting error accuracy. The international journal of medical robotics and computer assisted surgery, 15(1), e1958.

Maharjan, N., Alsadoon, A., Prasad, P. W. C., Abdullah, S., & Rashid, T. A. (2021). A novel visualization system of using augmented reality in knee replacement surgery: Enhanced bidirectional maximum correntropy algorithm. The International Journal of Medical Robotics and Computer Assisted Surgery, 17(3), e2223.