1 Introduction

Distance estimation and precise measurement are of fundamental importance across many academic disciplines and research endeavors, and reliable distance determination supports the operation of construction, industrial, and autonomous vehicles. Kuenzel (2016) examined how humans perceive space and distance, Norman (2005) asserts that the estimation of accessibility has been studied extensively, and Thompson (2002) highlights the significance of distance estimation. Toye (1986) found that estimated distance and visual angle are equivalent.

Visibility is the determining factor for locating objects. Toye's findings indicate that humans can discriminate between objects in close proximity and those at a distance. McCready (1985) and Norman (2005) posited that distance ratios are judged with higher precision than depth ratios, and Norman's results suggest that individuals can accurately assess distance ratios in both indoor and outdoor environments. Norman et al. (2017), however, did not consider the visually perceived ratio of vertical distances.

Geisler's work showed that two-dimensional retinal images can be used to infer distance, three-dimensional shape, and distance distributions in natural scenes. Burge and Geisler (2013) report that individuals estimate ratios more proficiently than lengths, and that distance estimation draws on vertical vision. Viguier (2001) combined lasers, ultrasonic sensors, computer vision, and neural networks to estimate distance. Tarro (2012) reports accurate estimation of distances between built structures.

Image processing combined with 360-degree cameras enables efficient examination and evaluation of structures. Fish-eye photography became widespread during the early 2000s. Aerial photography supports quantitative remote sensing and the creation of three-dimensional computer models (Gurtner, 2009; Greene, 1986; Amad, 2012; Wróżyński et al., 2020; Loddo, 2021). These technologies can quantify the spatial separation between an image and an object, and are employed in transportation, urban planning, and architecture.

Regan and Spekreijse (1977) investigated distance perception, visual angles, and spatial geometry. Beier (2019) found visual angles to be more effective than physical angles, while Gogel's (1998) findings contradicted prevailing views of sight and space. Foley (1975) posits that humans tend to misestimate distances. To improve the accuracy of distance estimation, Fukusima (1997) developed precise measurements of viewing angles, without considering possible visual-spatial conflict. Foley (2004) calibrates vision using ratio normalization. Levin and Haber (1993a, 1993b) provide evidence that visual field offsets are greater than initially expected and are not influenced by viewing distance.

The term "sky view factor" (SVF), as used here, refers to the extent of the visual field observable in front of the eye when a stationary object lies between the eye and the fovea (Zakšek et al., 2011). Steyn (1980) measured radiation with a fish-eye lens based on the SVF concept, and the sky view factor has also been computed from solar radiation and diffusion ratios (Gurnsey et al., 2010; Oke, 1981). However, the potential correlation between the sky view factor and visual perception or field of view has yet to be explored.

The current experiment employed a virtual sky view factor (VSVF) and field of view (FOV), following Kastendeuch (2012). Sosa et al. (2014) used image processing to determine vertical spatial frequency values analogous to the VSVF and FOV used here (Table 1). Kim et al. (2016) suggested that fisheye images can be generated from 360-degree panoramas. Table 2 presents the resulting field of view (FOV) ratios.

Table 1 Establishment of the simulation models
Table 2 Image processing via ED-IP (content generated from the area pixel count of the field of view (FOV) derived via the image processing techniques outlined in this study, together with regression analysis)

To estimate image distance, this study examines the visual angle, which determines the dimensions of an object depicted in a photograph, together with the sky view factor. The paper presents the theoretical framework, research methodology, results, and implications for future research and practical applications.

2 Methods

2.1 A. Research summary

The present study considered and implemented the following procedures:

a. A virtual reality (VR) corridor was selected as the experimental setting because it offers a reduced level of distraction (Gu et al., 2020a, 2020b).
b. Students used computer vision and virtual reality (VR) technologies to estimate width and height at specific points.
c. 360-degree images were created to calculate distances, drawing on their application in robotics and autonomous vehicles (Iizuka, 1987).
d. Distance calculation incorporated stereo matching and structure-from-motion techniques applied to the 360-degree images.
e. Multiple-angle cameras were used, resembling the stereo image capture configuration described by Wan (2008).
f. Scene distances were estimated through image comparison.
g. Optical flow was used to estimate distances by analyzing image motion (Chukanov, 2021).
h. Novel distance estimation techniques based on convolutional neural networks (CNNs) were investigated (Amirian, 2020).
i. Distance estimation from 360-degree images was also discussed by Kiran (2020).
j. Touahni (2022) employed VR cameras and Structure from Motion (SfM) techniques to estimate distances within a virtual environment.
k. Structure-from-Motion (SfM) photogrammetry was used to reconstruct three-dimensional (3D) scenes.
l. Both fish-eye and conventional images were used for the SfM techniques.
m. Geometric equations were formulated to estimate distances.
n. The fish-eye triangulation technique, which uses camera parameters and feature locations to address lens distortion, was applied (Ding, 2021).
o. The geometric equations were modified to achieve precise distance estimation.
p. Fish-eye images, which capture a broad field of view, were employed.
q. Adjustment and calibration procedures were executed to optimize the accuracy of the 3D reconstruction.
r. Sky view factor (SVF) calculations were applied to indoor spaces, taking obstructions into account (requiring calibration).
s. Field of view (FOV) and indoor sky view factor (SVF) were integrated to estimate distance, taking pixel height into account.
t. A novel straight-gaze 360-degree field of view (FOV) was developed.
u. The field of view (FOV) was calculated from the camera sensor and lens parameters.
v. Distance was estimated from the FOV and objects of known height (Table 2).
w. The AR experiment used the camera features of SketchUp VR.
x. Distance estimation incorporated both panoramic and field of view (FOV) images.
y. Fisheye panoramas were generated using equidistant projections (Table 1).
z. Feature matching was accomplished with SIFT/SURF algorithms, as described by Teke (2011); a minimal sketch follows this list.
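As a minimal illustration of item z, the sketch below matches features between two views with OpenCV's SIFT implementation (SURF is patent-encumbered and absent from stock OpenCV builds, so SIFT stands in for both). The file names are placeholders, not the study's data.

```python
import cv2

# Load two overlapping views (placeholder file names, not the study's data).
img1 = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and descriptors in both images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors with k-NN and keep matches passing Lowe's ratio test.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

print(f"{len(good)} reliable correspondences for SfM / triangulation")
```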

Figure 1 depicts the primary methodology used to make distance estimation observable. Across all evaluations, we assessed students' tendencies to overestimate or underestimate corridor dimensions, as shown in Fig. 2.

Fig. 1

Schematic flow of a 360-degree simulation of human vision using fish-eye lenses. The angle of view is determined by the visual capabilities of the human eye. (The designs were created by the authors using JW CAD and SketchUp.) Unless explicitly stated otherwise, all tables and figures were created by the author(s)

Fig. 2

Students' overestimation and underestimation of corridor width and height (based on data provided by the attending students)

2.2 B. Field of view (FOV) and sky view factor (SVF)

Fish-eye images encompass the entire visual field perceivable by the human eye; however, they introduce distortions that reduce the precision of distance estimation. The estimation of image scale and distance depends on the field of view, which can be calculated as follows:

$$average\;viewpoint=height\;above\;ground/tan(FOV/2)$$
(1)

The field of view (FOV) refers to the angular extent of the observable scene captured by a fish-eye lens. In order to establish a connection between the field of view and distance estimations, it is necessary to consider the impact of the field of view on the scale of the image. The relationship between the image scale and the field of view can be expressed by the following equation:

$$image\;scale=(sensor\;size\times distance)/(focal\;length\times object\;size).$$
(2)
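As a sketch, Eqs. (1)-(2) can be transcribed directly, with Eq. (2) rearranged for distance; the lens and object values below are illustrative, not the study's camera parameters.

```python
import math

def average_viewpoint(height_above_ground: float, fov_deg: float) -> float:
    """Eq. (1): average viewpoint = height above ground / tan(FOV / 2)."""
    return height_above_ground / math.tan(math.radians(fov_deg) / 2.0)

def distance_from_image_scale(image_scale: float, focal_length: float,
                              object_size: float, sensor_size: float) -> float:
    """Eq. (2) rearranged for distance, from
    image scale = (sensor size * distance) / (focal length * object size)."""
    return image_scale * focal_length * object_size / sensor_size

# Illustrative values only (not the study's camera parameters).
print(average_viewpoint(1.6, 120.0))                    # eye height 1.6 m, 120 deg FOV
print(distance_from_image_scale(0.05, 8.0, 2.4, 24.0))  # distance for a 2.4 m object
```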

By incorporating the field of view (FOV) and image scale, Eq. (2) can be rearranged to determine the distance between a camera and an object of known size. The SVF for indoor spaces can be calculated using the following formula:

$$SVF = A_{visible} / A_{ground}$$
(3)

The presence of walls, furniture, and other objects complicates the measurement of the indoor sky view factor (SVF). The SVF therefore varies spatially within the room, requiring the integration of images from multiple rooms. Indoor SVF estimation is also influenced by the lighting conditions within a room; the intricate geometry and specific lighting of indoor scenes necessitate calibration and processing. In a virtual reality (VR) setting, by contrast, such constraints are not required. Distance is then given by the equation:

$$distance=(object\;height\times image\;sensor\;size)/(2\times tan(FOV/2)\times pixel\;height\times SVF).$$
(4)
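A direct transcription of Eq. (4) as a Python function; the argument values below are placeholders rather than measurements from the experiment.

```python
import math

def distance_fov_svf(object_height: float, sensor_size: float,
                     fov_deg: float, pixel_height: float, svf: float) -> float:
    """Eq. (4): distance = (object height * sensor size)
    / (2 * tan(FOV/2) * pixel height * SVF)."""
    fov = math.radians(fov_deg)
    return (object_height * sensor_size) / (
        2.0 * math.tan(fov / 2.0) * pixel_height * svf)

# Placeholder inputs: a 2.7 m corridor height imaged at 480 px with SVF = 0.35.
print(distance_fov_svf(object_height=2.7, sensor_size=1000.0,
                       fov_deg=120.0, pixel_height=480.0, svf=0.35))
```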

Our objective was to develop a 360-degree panoramic field of view (FOV) using either fish-eye or wide-angle lenses. The focal point of the image is positioned at the center and surrounded by a circular boundary; the circumference corresponds to the visual periphery. The angular field of view (FOV) of an image is calculated as follows:

$$\theta = 2 \times arctan (d / (2 \times f))$$
(5)

In this equation, θ represents the angular field of view, d the diagonal measurement of the camera sensor, and f the focal length of the lens. The equidistant projection, a cartographic technique, flattens the circular field of view:

$$x=r\times\sin(\theta)\;\;and\;\;y=r\times\cos(\theta),$$
(6)

Here r represents the distance between the center of the field of view (FOV) image and a specific point on the image, and θ represents the angle associated with that point.
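A sketch of Eqs. (5)-(6): the angular FOV from the sensor diagonal and focal length, and the polar mapping of a point onto the flattened circular image. Making the radius grow linearly with the viewing angle is the defining property of an equal-distance projection, and is an assumption layered on top of Eq. (6).

```python
import math

def angular_fov(d_sensor: float, focal_length: float) -> float:
    """Eq. (5): theta = 2 * arctan(d / (2 * f)), in radians."""
    return 2.0 * math.atan(d_sensor / (2.0 * focal_length))

def equidistant_point(theta: float, phi: float, image_radius: float, max_theta: float):
    """Eq. (6): place a ray at viewing angle `theta` and azimuth `phi` on the flat
    fisheye disc; r grows linearly with theta (equal-distance assumption)."""
    r = image_radius * theta / max_theta
    x = r * math.sin(phi)
    y = r * math.cos(phi)
    return x, y

fov = angular_fov(d_sensor=24.0, focal_length=8.0)  # placeholder lens parameters
print(equidistant_point(theta=fov / 4, phi=math.radians(30),
                        image_radius=500, max_theta=fov / 2))
```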

To identify image characteristics, the SIFT/SURF algorithm was used to extract the features of each image (Yang, 2009; Moisan, 2004). According to Ruan (2009), camera distance estimation involves determining the spatial positions of objects within a three-dimensional scene to ascertain their respective distances from the camera. The next step calculates the Area (in square pixels), Area Fraction (in percent), and Aspect Ratio of Area C using the following equations:

$$Area\;(px^2)=number\;of\;pixels\;in\;Area\;C$$
(7)
$$Area\;Fraction\;(\%)=(Area\;C/Total\;image\;area)\times 100$$
(8)
$$Aspect\;Ratio=width\;of\;Area\;C/height\;of\;Area\;C$$
(9)
$$SVF=(Area\;of\;sky/Total\;image\;area)\times 100$$
(10)
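Eqs. (7)-(10) reduce to pixel counting on binary masks. A minimal NumPy sketch, assuming `area_c_mask` and `sky_mask` are boolean arrays produced by an earlier segmentation step (hypothetical inputs):

```python
import numpy as np

def region_metrics(area_c_mask: np.ndarray, sky_mask: np.ndarray) -> dict:
    """Eqs. (7)-(10): area, area fraction, aspect ratio of Area C, and SVF."""
    total = area_c_mask.size                          # total image area in pixels
    area_c = int(area_c_mask.sum())                   # Eq. (7)
    rows, cols = np.nonzero(area_c_mask)              # bounding box of Area C
    width = cols.max() - cols.min() + 1
    height = rows.max() - rows.min() + 1
    return {
        "area_px2": area_c,
        "area_fraction_pct": 100.0 * area_c / total,  # Eq. (8)
        "aspect_ratio": width / height,               # Eq. (9)
        "svf_pct": 100.0 * sky_mask.sum() / total,    # Eq. (10)
    }

# Toy 4x4 masks, illustrative only.
c = np.zeros((4, 4), bool); c[1:3, 1:4] = True
s = np.zeros((4, 4), bool); s[0, :] = True
print(region_metrics(c, s))
```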

Distance can be estimated with the equation ED = B(TD) + G(VA), where B and G are the regression coefficients on actual distance and visual angle, respectively. The corridor's endpoint is estimated from the fish-eye image obtained in Step 1 and the analysis of Area C; the image processing technique identifies the termination point of the corridor.

$$VA=2\times arctan(D/2L)$$
(11)

The variable D represents the diameter of the fish-eye image, while the variable L represents the distance between the camera and the corridor.

$$ED\;estimates\;TD.$$
(12)
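Eqs. (11) and (21) in code: the visual angle from the fisheye diameter D and the camera-to-corridor distance L, and the regression form ED = B(TD) + G(VA). The B and G values are the under- and over-estimation coefficients reported in Sect. 3; reusing them outside this experiment is an assumption.

```python
import math

def visual_angle(D: float, L: float) -> float:
    """Eq. (11): VA = 2 * arctan(D / (2 * L)), returned in degrees."""
    return math.degrees(2.0 * math.atan(D / (2.0 * L)))

def estimated_distance(TD: float, VA: float, B: float, G: float) -> float:
    """Eq. (21): ED = B * TD + G * VA."""
    return B * TD + G * VA

va = visual_angle(D=1.0, L=6.0)                              # placeholder geometry
print(estimated_distance(TD=6.0, VA=va, B=0.860, G=0.175))   # UE coefficients (Sect. 3)
print(estimated_distance(TD=6.0, VA=va, B=1.108, G=0.164))   # OE coefficients (Sect. 3)
```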

The equations presented below are employed to generate the data displayed in table a:

$$Area\;(px^2)=Number\;of\;pixels\;in\;the\;object$$
(13)
$$Area\;Fraction\;(\%)=(Area\;of\;Object/Total\;Area\;of\;Image)\times 100$$
(14)
$$Aspect\;Ratio=Major\;Axis\;Length/Minor\;Axis\;Length$$
(15)

Figure 3 presents the aspect ratio associated with each study zone.

Fig. 3

a The conversion from fisheye perspective to field of view (FOV), considering parameters including the virtual sky view factor (VSVF), aspect ratio (AR), point of view (PV), corridor height (H, in cm), and corridor width (W, in cm); the transformation of a panorama to a viewing angle. (The content of table a was generated from the area pixel count of the FOV derived via the image processing techniques outlined in this study, together with regression analysis.) Table 3 presents a comprehensive breakdown of the aspect ratio for each study zone

To address the analytical component of the project, a total of seven three-dimensional (3D) models were created. To capture comprehensive visual data, a virtual fish-eye camera with a 360-degree field of view photographed the scene from all possible angles (Luo et al., 2016). Although the survey was conducted at two-meter intervals (as illustrated in Fig. 4), the field of view changes only every six meters; a two-meter radius therefore falls within this range, ensuring that all possible points are captured, because the angle of view does not change significantly over such distances (Newhall, 1956).

Fig. 4

a Zone 1, fraction, filled; b minimum distance visible

The ceilings in the hallways were removed because a typical adult does not tilt their head upward while walking. For an average adult of height 170 cm taking 75 cm steps while walking straight ahead without rotating their head, the angle of view to the ground can be estimated using basic trigonometry. Construct a right triangle in which one leg is half the individual's height (85 cm) and the other is half the stride length (37.5 cm); the angle between the horizontal ground and the line connecting the person's eye to the center of their field of view (assumed to be directly in front of them) is then the inverse tangent (arctan) of the ratio of these two lengths.

$$\tan(\theta)=opposite/adjacent=85/37.5$$
(16)

Thus θ = arctan(85/37.5) ≈ 66.2 degrees.

However, this angle spans the complete range of visual perception from the ground to the individual's eye level. Since our focus is solely on the angle relative to the ground, half of this angle must be subtracted, as the eye is positioned at an approximate midpoint between the ground and the top of the head.

$$\theta_{ground}=(66.2/2)\approx33.1\;\mathrm{degrees}.$$

Since our focus is on the complementary angle, formed between the line of sight and the ground, it is obtained by subtracting from 90 degrees:

$$\mathrm{view}\;\mathrm{angle}=90-33.1\approx56.9\;\mathrm{degrees}$$

Consequently, the angular separation between an individual's eye and the central point of their visual field is approximately 56.9 degrees (see the check below). For the SVF concept to function effectively, ceilings must be excluded from consideration. View frame adjustments primarily exhibit a linear characteristic. Because the visual perspective is directed downwards (Loomis et al., 1996), individuals concentrate their visual attention on vertical elements such as walls, rather than on the ceiling or windows, in such environments (Sakamoto et al., 2010).
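The arithmetic above can be verified in a few lines; the 170 cm height and 75 cm stride are the values assumed in the text.

```python
import math

height_cm, stride_cm = 170.0, 75.0
theta = math.degrees(math.atan((height_cm / 2) / (stride_cm / 2)))  # arctan(85/37.5)
theta_ground = theta / 2            # eye sits roughly midway between ground and head
view_angle = 90.0 - theta_ground    # complementary angle to the line of sight

print(round(theta, 1), round(theta_ground, 1), round(view_angle, 1))
# -> 66.2 33.1 56.9
```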

Prior to presentation, the circular fisheye image was converted to an equirectangular projection. The camera's horizontal and vertical fields of view (FOV) were then determined from its specifications. Given these factors and the image size, the following equation gives the viewing angle of each pixel:

$$\mathrm{Visual}\;\mathrm{angle}=2\times\arctan(0.5\times\mathrm{FOV}/\mathrm{image}\;\mathrm{size})$$
(17)

The non-sky regions of the fisheye photographs were detected and eliminated using thresholding techniques and morphological operations in order to identify the sky view factor (SVF). The remaining regions were designated as sky areas. The SVF was then computed using the following formula:

$$\mathrm{SVF}=(\mathrm{sky}\;\mathrm{area}/\mathrm{total}\;\mathrm{area})\;\times100\;\mathrm{percent}$$
(18)

Distance was then calculated by combining the estimated visual angle and the SVF using the following equation:

$$\mathrm{Distance}=\mathrm{object}\;\mathrm{height}/(2\times\cos(\mathrm{visual}\;\mathrm{angle}/2)\times\mathrm{SVF}/100\%)$$
(19)
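A sketch of the Eqs. (17)-(19) pipeline with OpenCV, run here on a synthetic frame so it is self-contained: threshold the bright region as sky, clean the mask with a morphological opening, compute the SVF, and substitute into Eq. (19). The threshold, FOV, and object height are placeholders, and Eq. (17) is transcribed literally, with units as the paper uses them.

```python
import math
import cv2
import numpy as np

# Synthetic stand-in for a circular fisheye frame: a bright "sky" disc on black.
img = np.zeros((480, 480), np.uint8)
cv2.circle(img, (240, 120), 80, 255, -1)

# Threshold bright pixels as sky, then remove speckle with a morphological opening.
_, sky = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
sky = cv2.morphologyEx(sky, cv2.MORPH_OPEN, kernel)

svf = 100.0 * np.count_nonzero(sky) / sky.size          # Eq. (18), in percent

# Eq. (17), transcribed literally: visual angle = 2 * arctan(0.5 * FOV / image size).
fov_deg, image_size = 180.0, img.shape[0]
visual_angle = 2.0 * math.degrees(math.atan(0.5 * fov_deg / image_size))

# Eq. (19): distance from object height, visual angle, and SVF (placeholder height).
object_height = 2.7
distance = object_height / (
    2.0 * math.cos(math.radians(visual_angle) / 2.0) * svf / 100.0)
print(round(svf, 2), round(visual_angle, 2), round(distance, 2))
```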

This study assessed the condition and performance of six interior corridors and one bridge; the depicted settings closely resembled real-world places. The first passageway, a vaulted chamber, contains a display, and the second corridor connects the office area to the main entrance. Three-dimensional mixed-reality simulations were developed. The test corridors had average widths of 0.85, 2, 2.4, 2.75, 3.7, 7, and 14 m, and average heights of 2.4, 2.7, 2.95, 3.8, 4.9, 6.15, and 9 m, respectively.

Table 3 Zone corridor aspect ratios (The content was generated by utilizing the area pixel count of the field of view (FOV) derived from the image processing techniques outlined in this study, along with regression analysis)

3 Findings

This section presents an overview of the results obtained from the study. Table 4 compares the field of view (FOV) with the virtual sky view factor (VSVF) illustrated in Table 1. The FOV has a significant influence on the VSVF. The visual angle increases progressively from an initial value of 60 degrees to 90 and then 120 degrees; despite the wide 120-degree field of view, the image remains unaltered.

Table 4 FOV vs VSVF

Linear regression can be employed to calculate the coefficients of an equation of the form a + b * SVF + c * VA, where a, b, and c are constants representing the intercept and slopes of the regression line. Once these coefficients are determined, the equation can be used to compute the distance in new images from their visual angle and sky view factor. The specific equation and methodology may differ depending on the application and the available data; however, this experiment can establish an appropriate ratio between overestimation and underestimation.
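A minimal least-squares fit of distance = a + b * SVF + c * VA with NumPy. The five observations are made up for illustration; they are not the study's measurements.

```python
import numpy as np

# Made-up observations: SVF (0-1), visual angle (degrees), true distance (m).
svf = np.array([0.20, 0.35, 0.50, 0.65, 0.80])
va = np.array([65.0, 72.0, 90.0, 98.0, 118.0])
dist = np.array([3.1, 4.2, 5.8, 7.4, 9.0])

# Design matrix [1, SVF, VA]; solve for intercept a and slopes b, c.
X = np.column_stack([np.ones_like(svf), svf, va])
(a, b, c), *_ = np.linalg.lstsq(X, dist, rcond=None)

print(f"distance ~ {a:.3f} + {b:.3f}*SVF + {c:.3f}*VA")
print("prediction for SVF=0.4, VA=80 deg:", a + b * 0.4 + c * 80.0)
```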

The relationship between distance and the variables sky view factor (SVF) and visual angle (VA) can be expressed as:

$$Distance = f (SVF, VA)$$
(20)

In this equation, SVF represents a value ranging from 0 to 1, VA represents the visual angle measured in degrees, and f is a function that converts the given input values into an estimated distance. The specific functional representation of f is contingent upon the dataset and the intended application. However, it can be ascertained through the utilization of regression analysis or alternative machine learning techniques. The equation can be expressed as follows:

$$ED = B (TD) + G (VA)$$
(21)

These are the variables used in the previous equation:

ED = Estimated Distance.

VA = Visual Angle.

B = the regression coefficient on true distance (TD).

G = the regression coefficient on visual angle (VA).

UE (B = 0.860, G = 0.175).

OE (B = 1.108, G = 0.164).

UE = Under Estimation.

OE = Overestimation.

IP = Image Processing

(22)
(23)

The pixel-formatted data are converted into SI units:

px to meter: (px) × (0.000264583333).

[IP]_W: Image processing width.

[IP]_H: Image processing height.

Orien = Orientation.

W: actual width.

AR: Aspect Ratio.

Cent: CentroidX.

To clarify the interconnection among Eqs. 20, 21, 22 and 23: these equations delineate a linear-regression methodology for estimating distances from visual angle and sky view factor. The regression analysis yields coefficients that are then employed in an equation incorporating true distance and visual angle, enabling the estimation of distances. Equations 22 and 23 appear to be components of an image processing procedure used to preprocess the data for subsequent analysis. The precise functional form of these relationships may differ with the dataset and application, while the coefficients help balance overestimation against underestimation.
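The pixel-to-metre constant quoted above equals the 96 dpi reference pixel (0.0254 m / 96 ≈ 0.000264583333); a one-line helper, assuming that convention is what the authors intend:

```python
PX_TO_M = 0.0254 / 96.0   # 1 px at 96 dpi = 0.000264583333... m

def px_to_meter(px: float) -> float:
    """Convert an image-processing length in pixels to metres (96 dpi assumption)."""
    return px * PX_TO_M

print(px_to_meter(480))   # e.g. a hypothetical [IP]_H of 480 px is about 0.127 m
```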

Table 5 presents the disparities between the estimated values and the actual distance ratios. Nevertheless, the standard error falls within an acceptable range for this particular test.

Table 5 Distance deviation error estimated from height and width

The calculation of standard deviation and error in this study is derived from the students' distance estimation data. These statistical measures quantify the extent to which the estimated distances deviate from the actual values. The calculations were performed using Microsoft Excel.

4 Discussion

The variable VSVF plays a crucial role in the methodology employed in this study. To improve the accuracy of the distance-estimation-ratio forecasting method, its value or aspect ratio should be determined using either the RayMan method (Matzarakis et al., 2009) or the approach outlined in the current literature. The overestimation and underestimation ratios are taken into account to ensure that the calculated final value falls within the range defined by these two extremes (Levin, 1993).

The deviation error and standard error are calculated from the aspect ratio and estimated distance, as illustrated in Fig. 5. The table presents the corridor width and height for each research zone. These ratios, particularly when compared with the estimated values, show that the standard error lies between 1.8% and 6.9% of the true distance, which is within an acceptable threshold. As Table 5 shows, both the deviation error and the standard error are relatively small, suggesting that the estimate is suitable and accurate.

Fig. 5

a Zone 1 maximum, minimum, and ideal distances; b Zone 2 distances; c Zone 3 distances; d Zone 4 distances; e Zone 5 distances; f Zone 6 distances; g relationships between distance and H-W correlations

Furthermore, the sky view factor must be assessed carefully, given its potential variability with illumination, time of day, furniture, and other obstructions. Despite its inherent limitations, this strategy has potential utility in a wide array of scenarios, particularly urban environments. The method requires further refinement and evaluation to ascertain its suitability for sectors such as virtual reality, robotics, and autonomous vehicles.

A method for estimating visual angles uses the following equation to calculate the visual angle of an object:

$$\mathrm{Visual}\;\mathrm{angle}=2\times\arctan(\mathrm{object}\;\mathrm{size}/(2\times\mathrm{viewing}\;\mathrm{distance})).$$
(24)

The term "object size" refers to the physical dimensions of an object. The viewing distance refers to the spatial separation between an individual who is observing a particular object.

To employ the visual angle estimation technique for determining an object's distance within an image, the object's dimensions must be known in advance. In the absence of precise measurements, the dimensions can be estimated by comparison with a known reference object depicted in the image. Once the object's size is determined, the equation above gives its visual angle. Establishing the correlation between visual angle and distance then allows the object's distance to be determined:

$$\mathrm{Distance}\;\mathrm{to}\;\mathrm{object}=\mathrm{object}\;\mathrm{size}/(2\ast\tan(\mathrm{visual}\;\mathrm{angle}/2))$$
(25)

The computation of the SVF can be achieved by utilizing the subsequent formula:

$$\mathrm{SVF}=(A_{sky}/A_{hemi})\times100\%$$
(26)

The term "sky" refers to the expanse of the visible atmosphere. The term "hemi" refers to the entirety of the hemispherical region that is perceptible from the vantage point of the observer.

Determine the sky view factor (SVF) at the observer's position using the equation above, then estimate the distance to the object with:

$$\mathrm{Distance}\;\mathrm{to}\;\mathrm{object}=(\mathrm{object}\;\mathrm{size}/(2\ast\tan\;(\mathrm{visual}\;\mathrm{angle}/2)))/\mathrm{SVF}$$
(27)

This equation modifies the distance estimate by incorporating the influence of the sky view factor (SVF) on the visual angle. Atmospheric conditions, illumination, and image quality may influence the accuracy of the results.
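Eqs. (24)-(27) chained together as a sketch; the object size, viewing distance, and SVF values are illustrative only.

```python
import math

def visual_angle_deg(object_size: float, viewing_distance: float) -> float:
    """Eq. (24): visual angle = 2 * arctan(size / (2 * distance)), in degrees."""
    return math.degrees(2.0 * math.atan(object_size / (2.0 * viewing_distance)))

def distance_from_angle(object_size: float, va_deg: float) -> float:
    """Eq. (25): distance = size / (2 * tan(VA / 2))."""
    return object_size / (2.0 * math.tan(math.radians(va_deg) / 2.0))

def distance_with_svf(object_size: float, va_deg: float, svf: float) -> float:
    """Eq. (27): the Eq. (25) estimate divided by the SVF (here on a 0-1 scale)."""
    return distance_from_angle(object_size, va_deg) / svf

va = visual_angle_deg(2.0, 10.0)          # a 2 m object seen from 10 m
print(distance_from_angle(2.0, va))       # recovers ~10 m
print(distance_with_svf(2.0, va, 0.8))    # SVF-adjusted estimate
```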

Figure 5g illustrates that the optimal projected distance ratio lies between the overestimation and the actual proportion. A further finding emerged: if the ratio of average height to average width is below three, the following equation can be used to estimate the field of view (FOV) factor:

$$\sqrt[3]{2\mathrm{e}-\uppi}\times\left|\arctan(\mathrm{H}/3+\mathrm{W}/3-(\mathrm{H}\times\mathrm{W})/2)\right|\times\mathrm{HL}$$
(28)

Here e denotes Euler's number, while H, W, and HL represent the average height, the average width, and the limit of the horizon, respectively.
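Eq. (28) transcribed directly (note that 2e − π ≈ 2.30, so the cube root is real); the H, W, and HL values below are placeholder corridor geometry, not the study's data.

```python
import math

def fov_factor(H: float, W: float, HL: float) -> float:
    """Eq. (28): cbrt(2e - pi) * |arctan(H/3 + W/3 - H*W/2)| * HL,
    applicable when the height/width ratio is below three."""
    return ((2.0 * math.e - math.pi) ** (1.0 / 3.0)
            * abs(math.atan(H / 3 + W / 3 - H * W / 2)) * HL)

print(fov_factor(H=2.7, W=2.0, HL=6.0))   # placeholder corridor geometry
```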

Table 4 illustrates the disparity between observed and projected ratios, indicating a linear relationship between the variable and the prediction equation. The coefficient of determination (R-square) is 0.90, suggesting a strong correlation between the variables.

The authors generated these figures within the JW CAD environment.

5 Conclusion

The study incorporates overestimation and underestimation ratios to ensure that the calculated distances fall within acceptable parameters. The method's accuracy can be assessed through the deviation error and standard error, computed from the aspect ratio and estimated distance; these indicate accuracy within 1.8% to 6.9% of the true distance, a satisfactory level of precision.

The findings indicate that the sky view factor and visual angle estimation techniques show potential for accurately determining distances in 360-degree photographs, despite the inherent difficulties of implementing these methods. Further experimentation is needed to ascertain the reliability of these findings.

A further outcome is the identification of a robust linear association between the variables and the predictive model, evidenced by a substantial coefficient of determination (R-square) of 0.90.

Nevertheless, this study is subject to certain limitations as a result of its execution within a controlled environment. Additional investigation is required to substantiate the results in practical situations, taking into account variables such as image quality and lighting conditions that could potentially impact the effectiveness of the visual angle estimation technique.