1 Introduction

Optical image analysis [1] is a fundamental image processing technology. The technique partitions an image into a number of independent regions according to its specific optical characteristics [2]. It is widely used in image transmission, monitoring and identification, and video processing [3]. Embedded analysis based on optical principles can effectively improve the accuracy of image recognition [4], but it suffers from low efficiency and high computational complexity.

Regarding optical embedding schemes, the fabrication of planar multimode waveguides within thin glass foils, based on a two-step thermal ion exchange process, was reported in [5]. Gong et al. [6] designed algorithms for both transparent and opaque virtual optical network embedding over flexible-grid elastic optical networks.

Regarding image fusion, an information-based approach was proposed in [7] for assessing fused image quality using a set of primitive indices that can be calculated automatically, without training samples or machine learning. Byun et al. [8] presented an area-based image fusion algorithm to merge synthetic aperture radar and optical images. Mid-wave infrared, long-wave infrared, and dual-band images from a voltage-tunable quantum dot focal plane array were obtained and evaluated in [9]. A tone mapping algorithm based on fast image decomposition and multi-layer fusion was proposed in [10] to address the low efficiency and color distortion of several typical tone mapping operators for high dynamic range images. A multimodal fusion framework using a cascaded combination of the stationary wavelet transform and nonsubsampled transform domains was presented in [11] for images acquired with two distinct medical imaging sensor modalities.

The rest of the paper is organized as follows. Section 2 describes the optical image embedded analysis model. Section 3 presents the mobile embedded image fusion mechanism. Section 4 reports the analysis and verification of the fusion algorithm. Finally, Section 5 concludes the paper.

2 Optical image embedded analysis model

The optical image space is a three-dimensional coordinate system in which each sample point represents a light spot. Each light spot of the image is defined by three quantities: light intensity, image concentration, and light curve. The degree of distortion of the spot is critical for the identification and distributed transmission of the optical image. A suitable optical image space can provide accurate and abundant information.

In general, the distortion of a light spot is a basic condition and essential feature of an object, which should be extracted accurately from the complex environment. At the same time, spot distortion seriously affects the accuracy and integrity of image perception. A light spot is composed by mixing three datums: radian, intensity, and concentration. When the image is analyzed, the basic optical space must be built on these three datums. The influence of other optical reference-space factors can generally be analyzed by converting from radian, intensity, and concentration, for example, through a linear transformation of the nonlinear concentration curve or through the fiber refractive index.

Figure 1 shows the transition of a light spot between two states. In state 1, the spot is embedded mainly along four directions; in state 2, it is embedded along eight directions. Each direction carries the diversity of the three baseline elements of the spot. Each square in Fig. 1 represents a direction image vector of the light spot, and the black dot inside a square indicates the diversity of the reference vector elements in that direction. Formula (1) gives the light intensity LI_spot of the spot. The image concentration IC_spot is obtained from the surface integral and nonlinear transformation in formula (2), and the light curve LR_spot is obtained from formula (3).

Fig. 1 Spot transfer in image embedded analysis

$$ \left\{\begin{array}{l} \mathrm{LI}_{\mathrm{spot}} = \frac{1}{D_1 + D_2}\left(\alpha \sum_{i=1}^{4} \mathrm{LI}_i + \beta \sum_{j=1}^{8} \mathrm{LI}_j\right) \\ \mathrm{LI}_4 = \sqrt{D_1} \\ \mathrm{LI}_8 = \sqrt{D_2} \end{array}\right. $$
(1)

Formula (1) applies only to the conversion between the two light-spot states shown in Fig. 1. Here, D_1 and D_2 represent the image data vectors of spot states 1 and 2, and α and β are the curved-surface coefficients of spot states 1 and 2, respectively. In general, the scheme can be extended to the intensity analysis of any light spot after changing the light diversity directions of the two states. The light intensity LI_4 is obtained from the four light samples of D_1, and LI_8 from the eight light samples of D_2.
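As a check on the arithmetic, the following Python sketch evaluates formula (1) numerically. It is illustrative only, not the authors' implementation: D_1 and D_2 are assumed to reduce to scalar magnitudes, and the directional samples LI_i and LI_j are passed in as plain lists.

```python
# Minimal sketch of formula (1); assumes D1 and D2 are scalar magnitudes
# of the two data vectors and that the 4- and 8-direction intensity
# samples are given explicitly. Not the authors' code.
import numpy as np

def light_intensity(li_four, li_eight, alpha, beta, d1, d2):
    """LI_spot = (alpha * sum(LI_i) + beta * sum(LI_j)) / (D1 + D2)."""
    return (alpha * np.sum(li_four) + beta * np.sum(li_eight)) / (d1 + d2)

# Side conditions of formula (1): LI_4 = sqrt(D1), LI_8 = sqrt(D2).
d1, d2 = 16.0, 25.0
li4, li8 = np.sqrt(d1), np.sqrt(d2)
print(light_intensity([li4] * 4, [li8] * 8, alpha=0.6, beta=0.4, d1=d1, d2=d2))
```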

$$ \left\{\begin{array}{l} \mathrm{IC}_{\mathrm{spot}} = \frac{1}{D_1 + D_2}\left(\iint_{S_1} f(x_1, y_1, z_1)\, dS_1 + \iint_{S_2} f(x_2, y_2, z_2)\, dS_2\right) \\ S = \sqrt{\frac{x^2 + y^2 + z^2}{xyz}} \end{array}\right. $$
(2)

Here, x, y, and z are the coordinates of the spot in the three-dimensional space system. S_1 and S_2 represent the first and second surface regions of the spot, and each area S can be obtained by calculating x, y, and z.
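A hedged numerical reading of formula (2): the two surface integrals can be approximated by Riemann sums over sampled surface points. The sampling scheme, the area elements ds, and the integrand f below are illustrative assumptions, not the paper's stated discretization.

```python
# Sketch of formula (2) with the surface integrals discretized as Riemann
# sums; the sample points, area elements, and integrand are assumptions.
import numpy as np

def surface_sum(points, ds, f):
    """Approximate an integral of f over surface samples (x, y, z), area element ds."""
    return sum(f(x, y, z) * ds for x, y, z in points)

def image_concentration(points1, points2, ds1, ds2, f, d1, d2):
    """IC_spot = (integral over S1 + integral over S2) / (D1 + D2)."""
    return (surface_sum(points1, ds1, f) + surface_sum(points2, ds2, f)) / (d1 + d2)

def area_term(x, y, z):
    """S = sqrt((x^2 + y^2 + z^2) / (x*y*z)), the second line of formula (2)."""
    return np.sqrt((x**2 + y**2 + z**2) / (x * y * z))
```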

$$ \left\{\begin{array}{l} \mathrm{LR}_{\mathrm{spot}} = \frac{\sqrt{H_1 + H_2}}{\left(H_1 H_2\right)_{\min}} \\ H = \frac{1}{k}\sum_{i=1}^{k}\left(\mathrm{LI}_{\mathrm{spot}}(i) + \mathrm{IC}_{\mathrm{spot}}(i-1)\right) \end{array}\right. $$
(3)

Here, H represents the coordinate set of the optical image interval in 3D space, and k represents the number of samples in the space occupied by the spot.
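The sketch below evaluates formula (3) under two interpretive assumptions: LI_spot(i) and IC_spot(i-1) are supplied as aligned per-sample sequences, and the min subscript on (H_1 H_2) simply selects the smallest available product, which is trivial when H_1 and H_2 are scalars.

```python
# Sketch of formula (3); the indexing and the (H1*H2)_min reading are assumptions.
import numpy as np

def h_coordinate(li_seq, ic_seq):
    """H = (1/k) * sum_{i=1..k} (LI_spot(i) + IC_spot(i-1)).
    li_seq holds LI_spot(1..k); ic_seq holds IC_spot(0..k-1), aligned."""
    k = len(li_seq)
    return sum(li + ic for li, ic in zip(li_seq, ic_seq)) / k

def light_curve(h1, h2):
    """LR_spot = sqrt(H1 + H2) / (H1 * H2), the scalar case of formula (3)."""
    return np.sqrt(h1 + h2) / (h1 * h2)

h1 = h_coordinate([0.62, 0.59, 0.61], [0.40, 0.42, 0.39])
h2 = h_coordinate([0.58, 0.60, 0.57], [0.41, 0.38, 0.40])
print(light_curve(h1, h2))
```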

On the basis of formulas (1), (2), and (3), combined into formula (4), the color filter model for optical image analysis can be obtained through a linear transformation of the curve.

$$ \left\{\begin{array}{l} \left[\begin{array}{c} R \\ G \\ B \end{array}\right] = \left[\begin{array}{c} \mathrm{LI}_{\mathrm{spot}} \\ \mathrm{IC}_{\mathrm{spot}} \\ \mathrm{LR}_{\mathrm{spot}} \end{array}\right] \left[\begin{array}{c} D_{\mathrm{R}} \\ D_{\mathrm{G}} \\ D_{\mathrm{B}} \end{array}\right] \\ D_{\mathrm{C}} = \int_{\lambda} f(\lambda)\, d\lambda \end{array}\right. $$
(4)

Here, D_R, D_G, and D_B represent the red, green, and blue data vectors, respectively. Each of these parameters can be obtained by calculating D_C.

On the basis of formulas (1), (2), and (3), combined into formula (5), the optical image analysis model can be obtained by using the nonlinear concentration of fiber refraction.

$$ \left\{\begin{array}{l} \left[\begin{array}{c} \lambda_x \\ \lambda_y \\ \lambda_z \end{array}\right] = \left[\begin{array}{c} \mathrm{LI}_{\mathrm{spot}} \\ \mathrm{IC}_{\mathrm{spot}} \\ \mathrm{LR}_{\mathrm{spot}} \end{array}\right] \left[\begin{array}{c} \mathrm{RF}_x \\ \mathrm{RF}_y \\ \mathrm{RF}_z \end{array}\right] \\ \mathrm{RF}_n = \int_{\lambda_n} f(\lambda_n)\, d\lambda_n \end{array}\right. $$
(5)

Here, RF is the optical fiber refractive index, and λ_x, λ_y, and λ_z represent the light wavelengths in the X, Y, and Z directions, respectively.
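Formulas (4) and (5) share one structure: the spot descriptor (LI_spot, IC_spot, LR_spot) is weighted channel by channel with an integrated spectral response (D_C in (4), RF_n in (5)). The sketch below reads the product of the two column vectors as element-wise and approximates the wavelength integrals with the trapezoidal rule; both readings, and the flat response used in the example, are assumptions rather than the authors' stated method.

```python
# One sketch covering formulas (4) and (5); the element-wise product and
# trapezoidal integration are interpretive assumptions.
import numpy as np

def spectral_weight(f, wavelengths):
    """D_C or RF_n: integral of the response f over one wavelength band."""
    lam = np.asarray(wavelengths, dtype=float)
    return np.trapz(f(lam), lam)

def channel_transform(descriptor, responses, bands):
    """Map (LI, IC, LR) to (R, G, B) in (4) or (lambda_x, lambda_y, lambda_z) in (5)."""
    weights = np.array([spectral_weight(f, b) for f, b in zip(responses, bands)])
    return np.asarray(descriptor, dtype=float) * weights

# Example with a flat (hypothetical) response on three visible bands in nm:
bands = [np.linspace(620, 680, 50), np.linspace(500, 560, 50), np.linspace(440, 490, 50)]
flat = lambda lam: np.ones_like(lam)
print(channel_transform([0.8, 0.5, 0.3], [flat] * 3, bands))
```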

3 Mobile embedded image fusion mechanism

The embedded analysis model based on an optical image has many advantages, but it also has shortcomings: the embedding relations between spots are neglected, and the embedded analysis process cannot effectively solve the image forming problem. To address these shortcomings, we use a mobile embedding scheme that maps multiple light points onto the same optical surface, which effectively reduces the ratio of redundant points and the embedding relations between fusion points. An optical crowd block structure is added to the optical image system. This structure improves the continuity of image regions across different coordinate systems and weakens both the fine differences between coordinate systems and the distortion redundancy area. The number of generated image intervals can be reduced by combining mobile embedded analysis with image fusion.
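The following sketch illustrates the crowd block idea in the simplest possible form: spots are quantized into shared blocks by coordinate rounding, and each block keeps one fused representative. The block size, the key scheme, and the centroid fusion rule are all hypothetical stand-ins; the paper does not specify them.

```python
# Hypothetical illustration of crowd-block fusion: nearby spots share one
# block (one optical surface), removing redundant spot-to-spot relations.
import numpy as np
from collections import defaultdict

def crowd_blocks(spots, block_size=1.0):
    """Group 3D spot coordinates into shared blocks by quantization."""
    blocks = defaultdict(list)
    for p in np.asarray(spots, dtype=float):
        key = tuple(np.floor(p / block_size).astype(int))
        blocks[key].append(p)
    # One fused representative (the centroid) per block:
    return {k: np.mean(v, axis=0) for k, v in blocks.items()}

spots = np.random.rand(100, 3) * 4.0
fused = crowd_blocks(spots, block_size=1.0)
print(f"{len(spots)} spots fused into {len(fused)} blocks")
```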

On the basis of the above mechanism, the optical image embedded optimization analysis model and algorithm structure are shown in Fig. 2. The four states in Fig. 2 cannot be converted into one another. The updating process helps to balance the embedded iterative process against the computation of image mining, and the crowd fusion scheme further improves the performance of the algorithm.

Fig. 2 Mobile embedded image crowd fusion mechanism

Assume that R_W is the initial amplitude of the optical wavelength, R_0 is the initial wavelength vector, δ denotes the wavelength jitter weight, and θ denotes the angle of the amplitude change. The amplitude variation of the optical image, R_WV, is then given by formula (6).

$$ \left\{\begin{array}{l} R_{\mathrm{WV}} = R_{\mathrm{W}}\left[-\frac{\sqrt{\left|R_0 - R_{\mathrm{WV}}\right|}}{\delta \sin\theta}\right] \\ \ln\left(R_{\mathrm{W}}\right) = -\frac{\left(\lambda_0 - \lambda_n\right)^2}{2\delta} \end{array}\right. $$
(6)
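Since R_WV appears on both sides of formula (6), a direct way to evaluate it is fixed-point iteration, as in the sketch below. The bracketed term is read as a multiplicative factor, R_W is recovered from the second line, and convergence is not guaranteed for every parameter choice; all of this is an interpretive reconstruction, not the authors' solver.

```python
# Fixed-point reading of formula (6); an interpretation, not the authors' code.
import numpy as np

def amplitude_variation(r0, lam0, lam_n, delta, theta, iters=50):
    """Solve R_WV = R_W * [-sqrt(|R0 - R_WV|) / (delta * sin(theta))] by iteration."""
    r_w = np.exp(-(lam0 - lam_n) ** 2 / (2.0 * delta))  # from ln(R_W) = -(l0 - ln)^2 / (2d)
    r_wv = r_w  # initial guess
    for _ in range(iters):
        r_wv = r_w * (-np.sqrt(abs(r0 - r_wv)) / (delta * np.sin(theta)))
    return r_w, r_wv

print(amplitude_variation(r0=1.0, lam0=550.0, lam_n=552.0, delta=4.0, theta=1.2))
```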

Therefore, after the fusion operation and rearrangement, the fused optical image vector I_O can be obtained as shown in formula (7).

$$ \left\{\begin{array}{l} I_{\mathrm{O}} = \int_{0}^{n} f(x, y, z)\, df \\ I_{\mathrm{O}} = \frac{I_0}{2\delta\lambda}\ln\left[R_{\mathrm{WV}} + \sum_{i=1}^{n} R_{\mathrm{WV}}(i)\right] \end{array}\right. $$
(7)
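The second line of formula (7) is straightforward to evaluate once the amplitude series R_WV(i) is available. The sketch below does so, treating I_0 and λ as given constants (an assumption) and requiring a positive log argument.

```python
# Sketch of the closed-form line of formula (7); I_0 and lambda are assumed given.
import numpy as np

def fused_image_vector(i0, delta, lam, r_wv, r_wv_series):
    """I_O = I_0 / (2 * delta * lambda) * ln(R_WV + sum_i R_WV(i))."""
    total = r_wv + float(np.sum(r_wv_series))
    if total <= 0:
        raise ValueError("log argument must be positive")
    return i0 / (2.0 * delta * lam) * np.log(total)

print(fused_image_vector(i0=1.0, delta=4.0, lam=0.55, r_wv=0.61, r_wv_series=[0.58, 0.55]))
```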

4 Fusion algorithm analysis and verification

In order to analyze the performance of embedded analysis and image fusion, we construct the three-dimensional coordinates of the image according to formulas (1), (2), and (3). The image spots of the space are connected into light spot surfaces according to a specific neighborhood system, and a subset of the space system is searched by crowd fusion. The fusion section appears during the embedding and updating of the light spot surfaces. Each spot surface is mapped to the image array, as shown in Fig. 3. The embedding and fusion interval can be found after several iterations, and the embedding and fusion process integrates the optical image spaces of different states into one space. The experimental environment is described in Table 1.
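A high-level sketch of this verification loop is given below. The stopping test (a stable fused spot count) is a hypothetical stand-in for the paper's embedding and fusion interval criterion, and the quantization-based fusion step mirrors the crowd block sketch of Section 3.

```python
# Hypothetical reconstruction of the iterative embedding/fusion loop.
import numpy as np
from collections import defaultdict

def fuse_step(spots, block_size):
    """One crowd-fusion pass: quantize spots into blocks, keep centroids."""
    blocks = defaultdict(list)
    for p in spots:
        blocks[tuple(np.floor(p / block_size).astype(int))].append(p)
    return np.array([np.mean(v, axis=0) for v in blocks.values()])

def find_fusion_interval(spots, block_size=1.0, max_iters=20):
    """Iterate until the fused spot count stabilizes (assumed stopping test)."""
    spots = np.asarray(spots, dtype=float)
    prev = len(spots)
    for it in range(1, max_iters + 1):
        spots = fuse_step(spots, block_size)
        if len(spots) == prev:
            return it, spots  # embedding and fusion interval found
        prev = len(spots)
    return max_iters, spots

iters, fused = find_fusion_interval(np.random.rand(200, 3) * 8.0)
print(f"interval reached after {iters} iterations, {len(fused)} fused spots")
```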

Fig. 3 Embedding and fusion interval

Table 1 Parameter settings

To facilitate comparison and analysis, we statistically analyze the original image, the result of plain embedded image analysis, and the result of the proposed ICF-OE algorithm. Figures 4, 5, and 6 show the performance of the fusion mechanism of the ICF-OE algorithm. The analysis result of ICF-OE (Fig. 5) is much closer to the original image (Fig. 4) and better than the result of the plain embedded scheme (Fig. 6). The optical crowd block structure increases the image data in the optical image system and establishes efficient continuity between image regions in different coordinate systems. The crowd blocks not only weaken the fine differences between coordinate systems but also help reduce the redundant distortion area. Therefore, crowd fusion improves the quality of the generated image.

Fig. 4 Original image

Fig. 5 Image of ICF-OE

Fig. 6 Image of the embedded scheme

Image restoration accuracy is shown in Fig. 7, where the X axis represents the number of iterations and the Y axis represents the image restoration accuracy. The comparison shows that plain embedded image analysis cannot solve the image forming problem. ICF-OE, based on the mobile embedding scheme and the optical mapping scheme, establishes a high-precision mapping of spots onto optical surfaces. This effectively reduces the ratio of redundant light points and the embedding relations between light points, and thus improves the accuracy of image restoration.

Fig. 7 Image restoration accuracy

5 Conclusions

In order to improve execution efficiency and reduce computational complexity, we design an image fusion mechanism based on an optical embedded system. First, based on the interval coordinate system of the three-dimensional optical image space, each sample point is defined as a light point characterized by light intensity, image concentration, and light curve, and an embedded optical image analysis model is proposed. Second, based on the mobile embedding scheme, multiple light points are integrated onto one optical surface, an optical crowd block structure is used to increase the image data in the optical image system, and the crowd fusion mechanism for mobile embedded images is proposed. Finally, compared with the plain embedded image analysis mechanism, the proposed mechanism achieves higher recognition accuracy, lower computational complexity, and lower redundancy.