SN Applied Sciences (2019) 1:1686

Image recognition in unmanned aviation using modern programming languages

  • Tamara Oleshko
  • Dmytro Kvashuk
  • Iryna Heiets
Research Article
Part of the topical collection: Engineering (general)

Abstract

The paper considers different methods of image recognition in unmanned aviation using modern programming languages. It is shown that the new era in aviation is characterized by new challenges, threats, and uncertainty, and that it is not always possible to identify a threat through standard means of control. The authors summarize various methodologies of analysis and justify an algorithm for recognizing icing in video-observation zones on the surface of an aircraft. The tested methods fall into three groups: preliminary filtering and image preparation, logical processing of the filtering results, and machine learning. The main filtering methods selected are filtering that highlights images in the recognition area, linearization, the Hough transform, and contour filtering as a separate class of filters. The authors propose to use a device that can determine possible areas of aircraft icing using airborne meteorological radar. The problem is to relate the image captured before icing to the changes in this image in the presence of ice.

Keywords

Image recognition · Unmanned aviation · Programming languages · Unmanned aerial vehicles

1 Introduction

The urgent need to improve the competitiveness of the global economy is gradually inclining the vector of human development toward the use of artificial intelligence in many spheres of life. Aviation is an especially urgent area, because a significant part of the tasks that arise during flight must be solved in an automated mode. Automated systems make it possible to reduce the cost of aircraft maintenance, improve piloting accuracy, and in some cases avoid human error, thereby reducing the risk of accidents and generally improving flight control.

The use of unmanned aerial vehicles (UAVs) in modern conditions is a promising direction in many areas of economic activity, from agriculture to the military-industrial complex. In some cases, the economic effect of UAVs is higher than the efficiency of manned aircraft, due to their size and fuel costs, as well as the absence of costs for pilots and airborne personnel. However, there is a large number of unresolved issues caused by the specifics of UAV operation, their resistance to adverse environmental conditions, and the human factors that occur during all stages of aircraft maintenance. Thus, the new era in aviation is characterized by new challenges and threats, uncertainty, operational complexity, and an insufficient amount of practical and scientific experience. All this, as well as the choice of methodology for monitoring the necessary stages of aircraft operation (from design to landing), requires optimal diagnostic algorithms and the automation of control with modern software and hardware, enabling quick and efficient adaptation of the control system to new conditions.

In many cases, it is not possible to identify a threat through standard means of control. In a manned aircraft, pilots can diagnose visual and acoustic hazards directly and thus respond better to unforeseen circumstances: for example, diagnosing icing of the flaps or control surfaces, detecting inadequate operator behavior during flight control, identifying an obstacle while passing through an air corridor, and diagnosing precipitation and other natural and mechanical hazards, all without recognition technologies based on machine vision and acoustic diagnostics.

Although this direction is being widely explored today and some experience has been accumulated, it is not enough to form an integrated pattern recognition system in aviation. Furthermore, the problem of diagnosing the operational stages of UAVs also requires efficient image recognition techniques, due to a significant number of errors during maintenance, testing, piloting, takeoff and landing, and even at the UAV design stage, errors that have both visual and acoustic features.

Thus, the future of unmanned aviation is impossible without the use of artificial intelligence, and given that human capabilities can be partially replaced by a machine, visual and acoustic diagnostics of the environment must be implemented in a machine mode. These needs drive the development of new approaches to applying image recognition technologies in aviation, the selection of effective tools for their implementation, and the accumulation of practical experience.

Today, the formation of an aviation infrastructure for UAV flight control is still under discussion, and the lack of criteria for evaluating the effectiveness of these systems, the lack of standardized rules for putting aviation flight complexes into operation, and the insufficient level of state funding hamper the development of unmanned aviation. Moreover, the testing period of the relevant systems is quite long, which is associated with many factors, both natural (the behavior of the UAV control system at different times of the year, resistance to freezing, high temperature, etc.) and human (errors by operators, air traffic controllers, etc.). Consequently, the UAV management system must be adapted to possible threats and risks.

Furthermore, a significant part of flights is carried out in the absence of radio interference, which makes it possible for operators of ground control stations to obtain information on the state of the UAVs. Despite the use of three-position compasses (which implement the flight control algorithm autonomously from the Earth's magnetic field, even under radio interference), the range of functions realizable under radio interference is currently limited. Pattern recognition algorithms require improvement; today this is one of the most urgent needs, and developments in this area are still at an early stage.

Given the rapid pace of development of intelligent technologies, the standardization of pattern recognition algorithms in UAV flight control systems is only a matter of time. At the same time, there is a high degree of risk associated with the reliability of these systems, on which not only the fate of the UAV depends, but also the safety of people. Hence, there are already a significant number of UAVs that can identify video images, but, unfortunately, no developer guarantees one hundred percent accuracy, so it is necessary to apply redundant systems that recognize the same images using other criteria, allowing a higher recognition level.

2 Literature review

The fourth technological revolution, which humanity is experiencing, is felt in all spheres of human activity: industry, economy, medicine, and the social sphere. Aviation is no exception. According to the forecasts of the Ministry of Transport of Great Britain, by 2030 the socioeconomic benefit of expanding the boundaries of using unmanned aircraft will be approximately 16 billion pounds. Cost savings can reach 42 billion pounds, which in general will create 600,000 jobs in this area [1]. According to the Ministry of Transport and the Ministry of Business, Innovation and Employment in New Zealand [2], the potential value of increased line-of-sight drone use to the economy over the next 25 years ranges between $2.5 billion and $3.9 billion, with dairying contributing between $1.3 billion and $1.5 billion. The European drone sector is predicted to directly employ more than 100,000 people within 20 years and to have an economic impact of over €10 billion per year, mainly in services [3].

Analysis of trends in this area allowed American scientists, back in 2005, to determine the prospects for the development of unmanned aircraft over the following 25 years [4]. As a result, the USA is one of the leading countries in the use of unmanned aircraft systems.

There are many positive examples, but with the development of a new industry, new needs and threats arise, usually associated with errors at the design, testing, and operation stages of unmanned aircraft systems. The practice of adapting new types of technology has been around for a long time, allowing an aircraft to be tested in real conditions. But, unfortunately, the human factor is always present and from time to time leads to failures of technological equipment. The cost of such failures in unmanned aircraft, as opposed to manned ones, is much higher, because the machine cannot make critical decisions that are not part of its programmed behavior and quite often becomes uncontrollable. Therefore, artificial intelligence should play a key role during pre-flight activities, which boil down to monitoring errors and technical equipment failures, both at the aircraft design stage and during its operation.

Considerable attention has been paid to the fault tolerance of unmanned aircraft. Among the scientists involved in this issue was Alan Hobbs, who studied the mechanisms behind failures of military, civilian, and transport unmanned aircraft. Analyzing statistical data, the researcher identified the most sensitive threat factors for failures of process equipment, the most significant being the human factor [5]. Kharchenko et al. [6], reviewing threats in aviation, noted the relevance of the scientific and practical task of creating a methodology for optimal decision making by dispatching personnel during flight monitoring, because such decisions depend on the information provided to the controller, his professional training, and his psycho-physiological state, which together determine the optimality of managerial actions. There is also an urgent need to optimize the stages of UAV operation. On this subject, Matiychik identified the main operational phases of the UAV that are carried out to ensure the safety and resiliency of the devices. These stages include the formation of flight tasks, takeoff procedures, data transmission to ground equipment, and aircraft landing. An important component, according to the scientist, is simplifying the design of an aircraft in order to reduce the time for its "assembly–disassembly" from the transport configuration and back, which reduces the number of verification procedures for technical requirements and thus optimizes the operation process. This, in turn, necessitates the automation of control over the implementation of these stages [7].

Current trends toward the increasing use of unmanned aircraft [8] and the urgent need to optimize their operation and control define new forms of functioning of the UAV service industry, which should include such economic activities as UAV testing, optimization of flight algorithms, optimization of software and hardware, training of pilots, mechanics, and a new generation of UAV operators, as well as the creation and maintenance of aerodrome infrastructure for UAVs.

Meeting each of these needs requires automated control systems based on artificial intelligence, built with tools in modern programming languages that implement optimal algorithms for the control and operation of UAVs. The implementation of pattern recognition mechanisms depends entirely on the algorithms and on the programming languages in which they are implemented, which is not yet fully investigated.

The widespread use of UAVs has given rise to a significant number of tasks, methods of monitoring and analysis, and decision-making systems, which also require software algorithms based on fuzzy logic, probability theory, and chaos theory. All of this should be combined into a single mechanism that ensures the efficiency of aircraft use.

Among the most common pattern recognition tasks is the recognition of graphic images, which implements the basic purpose of a UAV: video observation.

Scientists from Florida have developed a mechanism for recognizing bird images based on the automatic determination of extreme image spectra using an onboard UAV video recorder. The results showed recognition accuracy of over 80%. The speed of this algorithm was achieved using a programming language compiled to machine code (C++) [9]. Unmanned aircraft have also found wide application in graphic object recognition, for example, to identify damaged solar panels in solar power plants [10].

In general, modern requirements for the use of computer vision are reflected in space developments and in various types of security systems, for example in very crowded places, in order to prevent terrorism and to recognize persons who have committed crimes. Of course, the accuracy of such systems also cannot be 100%, because a person can easily hide or change their appearance.

Leading global companies in the field of information technology are constantly working to create software libraries for pattern recognition, reducing errors and creating new approaches to improvement. For example, scientists from India and Great Britain taught a neural network to recognize criminals wearing items of disguise. Google developed an open source machine learning library called TensorFlow, which makes it possible to build and train a neural network in order to automatically detect and classify images.

3 Methodology

In general, the principle of recognition of graphic objects is based on contour analysis, one of the main methods of recognition and search.

The analysis considers the contours of a graphic figure, obtained through the optimal selection of the object's saturation, contrast, and brightness, which significantly reduces the complexity of the algorithms and calculations. However, contour analysis has low resistance to interference, which is a significant drawback [11, 12]: any occlusion or only partial visibility of an object complicates recognition. Despite this, the efficiency of this method on clear images is quite high.

The technology of forming contour images was founded by the researcher John Canny in 1986, and the method he proposed remains relevant in computer vision systems to this day [13].

There are many different approaches to the selection of contours; their types and classification are presented by Shih [14]. The basic mechanisms for contour selection can be considered to be the detection of gradient maxima in the image and the detection of zero crossings, which mark transitions between opposite color values. A Gaussian filter is then applied against pre-set templates, allowing the contours of the image to be smoothed [15].

Further, the image is encoded into an array of coordinates, which can then be used effectively to search for similar patterns, for matching, for transformations, and the like. These coding methods include Freeman's chain code, which creates an array of data in the form of a sequence of segments, that is, straight lines of a certain length and direction [16]. The basis of such a representation is a four- or eight-direction table, each value of which has its own binary code (Fig. 1).
Fig. 1

Image encoding by Freeman. Created based on [16] (Freeman, H.)

The corresponding table specifies, for each pixel, the direction and the length of its vector.
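To make the encoding concrete, the following minimal Python sketch (our illustration, not code from the paper) converts a traced pixel contour into an 8-direction chain code; the direction numbering below is one common convention rather than necessarily Freeman's original table:

```python
import numpy as np

# 8-direction chain codes: index -> (dx, dy) unit step, counted
# counter-clockwise from "east"; the image Y axis grows downward
DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, -1),
              (-1, 0), (-1, 1), (0, 1), (1, 1)]

def freeman_chain_code(points):
    """Encode a pixel contour, given as a list of (x, y), as chain codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        step = (int(np.sign(x1 - x0)), int(np.sign(y1 - y0)))
        codes.append(DIRECTIONS.index(step))
    return codes

# A 2x2 square traversed clockwise in image coordinates
print(freeman_chain_code([(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]))  # [0, 6, 4, 2]
```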

The work of Vanyukova et al. [17] is devoted to the study of this problem; it clearly shows how the digital code of a contour image changes when its structure changes, and it presents a mechanism of contour image transformation.

A starting point is fixed on the contour. The contour is then scanned, and each displacement vector is described by the complex number a + ib, where a is the shift of the point along the X axis and b is the shift along the Y axis, each shift taken relative to the previous point. The contour is thus defined as a set of elementary vectors represented by their two-dimensional coordinates. A change of the starting point cyclically shifts the vector sequence, and a change in the scale of the image amounts to multiplying each elementary vector by the scale factor.

The scalar product of the contour N1 and the modified contour N2 can be represented as a complex number:
$$\eta = (N_{1}, N_{2}) = \sum_{n=0}^{k-1} (\gamma_{n}, \nu_{n}),$$
(1)
where k is the dimension of the contour vector, \(\gamma_{n}\) is an elementary vector of contour N1, \(\nu_{n}\) is an elementary vector of contour N2, and \((\gamma_{n}, \nu_{n})\) is the scalar product of complex numbers.
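Equation (1) translates directly into NumPy; the sketch below is our illustration, with `elementary_vectors` and `contour_similarity` as hypothetical helper names. Normalizing |η| by the norms of both contour vectors gives a similarity score that is insensitive to rotation and scaling of the contour (though not to a change of starting point):

```python
import numpy as np

def elementary_vectors(points):
    """Represent a contour, a list of (x, y) points, as complex elementary
    vectors a + ib: each entry is the shift relative to the previous point."""
    z = np.array([complex(x, y) for x, y in points])
    return np.diff(z)

def contour_similarity(gamma, nu):
    """|eta| / (|N1| |N2|) for two contour vectors of equal dimension k;
    values near 1.0 mean the contours coincide up to rotation and scale."""
    eta = np.vdot(nu, gamma)  # sum of gamma_n * conj(nu_n), as in Eq. (1)
    return abs(eta) / (np.linalg.norm(gamma) * np.linalg.norm(nu))
```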

Given the widespread use of contour extraction tools, the most up-to-date algorithms are the Canny algorithm and the Sobel operator [18, 19]. Shevchenko [20] compared the Canny algorithm with the Sobel operator, establishing that Canny's algorithm is faster, but the quality of the contours is higher with the Sobel operator.

The Sobel operator is based on applying, at each image point, two 3 × 3 masks, one the 90-degree rotation of the other. These masks reveal contours located vertically and horizontally in the image.

For each point in the image, the operator uses the intensity values of the color brightness within the 3 × 3 masks. These masks are convolved with the original image in order to determine the approximate derivatives horizontally and vertically. Thus, if A is the original (initial) image, \(G_{x}\) and \(G_{y}\) are images in which each point holds the approximate derivative along the X and Y axes, respectively:
$$G_{x} = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * A, \quad G_{y} = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A;$$
where * denotes a two-dimensional convolution operation.
Here the X coordinate increases to the right and the Y coordinate downward. The two derivative images are combined pointwise into the resulting image using the following formula:
$$G = \sqrt{G_{x}^{2} + G_{y}^{2}},$$
(2)
so G is calculated at each point (i, j) from the matrices \(G_{x}\) and \(G_{y}\).

Consequently, this operator calculates the gradient of the image brightness at each point and thus makes it possible to find the direction of the greatest increase in brightness. It shows how "sharply" or "smoothly" the brightness of the image changes at each point.

Mathematically, the gradient of a function of two variables is, for each image point, a two-dimensional vector whose components are the derivatives of image brightness horizontally and vertically. At each point of the image, the gradient vector is oriented in the direction of the greatest increase in brightness, and its length corresponds to the magnitude of the change in brightness. This means that at a point inside a region of constant brightness the vector is zero, while at a point lying on the border of regions of different brightness the vector crosses the boundary in the direction of increasing brightness.
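In OpenCV this reduces to two calls to `cv2.Sobel` followed by the pointwise magnitude of Eq. (2); the sketch below is illustrative, with `frame.png` as a hypothetical input frame:

```python
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame

# Approximate derivatives along X and Y with the 3x3 Sobel kernels
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

# Gradient magnitude, Eq. (2), clipped back to 8-bit for display
g = np.sqrt(gx ** 2 + gy ** 2)
cv2.imwrite("sobel_edges.png", np.uint8(np.clip(g, 0, 255)))
```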

The software implementation of these methods is carried out using a number of libraries; among them, OpenCV, written in C/C++ and open source, can be considered the most suitable. The library includes more than 1000 functions and algorithms, and its development dates back to 1998, begun by Intel with the active participation of the developer community. Alongside it there are libraries with more detailed and specialized functions and settings, such as Halcon and libmv. OpenCV can be considered the most complete in terms of capabilities and coverage of image processing topics. It has a permissive BSD license, and it can be used both freely and commercially; the only licensing requirement is that the accompanying materials contain information about the library.

Its functionality is available in various languages: C, C++, Python, CUDA, Java, etc. It is supported by operating systems such as Windows, Linux, Mac, Android, iOS.

Summing up the above, computer vision technology is today widely used in various fields, including the piloting of UAVs. Scientists agree that optimizing image recognition technologies should be considered at the level of methodology, which is already highly developed; nevertheless, there is a need to improve existing methods and develop new ones for contour analysis, hyperspectral analysis, and video stream analysis. Among the software tools, the OpenCV library, which can be used from many programming languages, can be highlighted.

4 Empirical results

The formation of the templates from which recognition of graphic images proceeds depends on many factors, and it is on their basis that tasks for computer vision can be formulated. Therefore, it is necessary to focus on already tested methods, which in general are divided into three groups:
  • The first group: the preliminary filtering and image preparation.

  • The second group: the logical processing of the results of the filtering.

  • The third group: machine learning.

The boundaries between the groups are very conditional. To solve a problem, it is not always necessary to apply methods from all groups, as sometimes two suffice, and sometimes even one.

The main methods of filtering images include the following:

  1. Filtering that allows highlighting of images in the recognition area.

  2. Linearization (for RGB images and images in grayscale).

  3. The Hough transform, which allows the detection of geometric shapes. It is an algorithm used in numerical methods of image processing; at its heart lies a mechanism for identifying lines in an image, as well as ellipses and circles (see the sketch after this list).

  4. Filtering contours as a separate class of filters.

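As an illustration of item 3, OpenCV exposes the Hough transform through `cv2.HoughLinesP` (line segments on an edge map) and `cv2.HoughCircles` (circles); the sketch below is indicative, and all thresholds are placeholder values that would need tuning to the footage:

```python
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
edges = cv2.Canny(img, 50, 150)

# Probabilistic Hough transform: straight line segments in the edge map
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=5)

# Hough gradient method: circles found directly on the grayscale image
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=150, param2=40, minRadius=5, maxRadius=100)
```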
Contours are extremely useful when moving from working with an image to working with the objects in that image. When an object is rather complex but stands out well, selecting its contours is often the only way to work with it. A number of algorithms solve the problem of contour filtering:

  • The Canny operator,

  • The Sobel operator,

  • The Laplace operator,

  • The Prewitt operator,

  • The Roberts operator.

In the first stage of the implementation of the Canny operator, the image is blurred using a Gaussian blur:
$$f(x,y) = \frac{1}{2\pi \sigma^{2}} \exp\left(-\frac{x^{2} + y^{2}}{2\sigma^{2}}\right),$$
(3)

Then the gradients are computed; boundaries are marked where the image gradient reaches a maximum.

Four filters for detecting horizontal, vertical, and diagonal edges in the blurred image are used in this algorithm. The next stage is non-maximum suppression: only local maxima are marked as contour boundaries. Finally, double thresholding with a low and a high threshold is applied, and all edges unconnected with the strong boundaries of the image are suppressed (hysteresis).
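With OpenCV, these stages reduce to a Gaussian blur followed by `cv2.Canny`, which performs the gradient computation, non-maximum suppression, and hysteresis internally; the kernel size and the 50/150 threshold pair below are illustrative choices:

```python
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Step 1: Gaussian blur, Eq. (3), suppresses pixel noise before differentiation
blurred = cv2.GaussianBlur(img, (5, 5), sigmaX=1.4)

# Steps 2-4 (gradients, non-maximum suppression, double-threshold hysteresis)
# happen inside cv2.Canny; 50/150 is the low/high threshold pair
edges = cv2.Canny(blurred, 50, 150)
cv2.imwrite("canny_edges.png", edges)
```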

The Laplace operator is invariant to image rotation. It is based on the second derivatives of image brightness, which are combined in the Laplacian:
$$\nabla^{2} f = \frac{\partial^{2} f}{\partial x^{2}} + \frac{\partial^{2} f}{\partial y^{2}}.$$
(4)
Second-order partial derivatives are used to calculate the Laplacian:
$$\frac{\partial^{2} f}{\partial x^{2}} \approx (f(x+1,y) - f(x,y)) - (f(x,y) - f(x-1,y)) = f(x+1,y) - 2f(x,y) + f(x-1,y);$$
$$\frac{\partial^{2} f}{\partial y^{2}} \approx (f(x,y+1) - f(x,y)) - (f(x,y) - f(x,y-1)) = f(x,y+1) - 2f(x,y) + f(x,y-1).$$
(5)
In this case, their sum equals:
$$\frac{\partial^{2} f}{\partial x^{2}} + \frac{\partial^{2} f}{\partial y^{2}} = f(x+1,y) + f(x-1,y) - 4f(x,y) + f(x,y+1) + f(x,y-1).$$
(6)
This means that, as with the other operators considered, a convolution matrix is used to calculate the Laplacian:
$$\nabla^{2} A = \begin{bmatrix} 0 & +1 & 0 \\ +1 & -4 & +1 \\ 0 & +1 & 0 \end{bmatrix} * A,$$
(7)
where A is the original (raw) image.
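A brief sketch of both routes in OpenCV: applying the kernel of Eq. (7) explicitly with `cv2.filter2D`, and using the library's own `cv2.Laplacian`, which with `ksize=1` uses the same 3 × 3 aperture:

```python
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# The 3x3 kernel of Eq. (7), applied by explicit filtering
# (the kernel is symmetric, so correlation and convolution coincide)
kernel = np.array([[0, 1, 0],
                   [1, -4, 1],
                   [0, 1, 0]], dtype=np.float64)
lap_manual = cv2.filter2D(img, cv2.CV_64F, kernel)

# The built-in operator with the same aperture
lap_builtin = cv2.Laplacian(img, cv2.CV_64F, ksize=1)
```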
Judith Prewitt proposed her own operator, known as the Prewitt operator. It has two 3 × 3 kernels with which the image is convolved in order to calculate approximate values of the derivatives, one horizontally and one vertically. The operator is based on the concept of the central difference:
$$\frac{\partial f(x,y)}{\partial x} \approx \frac{f(x+1,y) - f(x-1,y)}{2}; \quad \frac{\partial f(x,y)}{\partial y} \approx \frac{f(x,y+1) - f(x,y-1)}{2},$$
(8)
Using an initial image A, the images \(G_{x}\) and \(G_{y}\), in which each point contains the horizontal and vertical approximations of the derivative, are calculated by convolution, and the gradient magnitude follows as before:
$$G_{x} = \begin{bmatrix} -1 & 0 & +1 \\ -1 & 0 & +1 \\ -1 & 0 & +1 \end{bmatrix} * A, \quad G_{y} = \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ +1 & +1 & +1 \end{bmatrix} * A, \quad G = \sqrt{G_{x}^{2} + G_{y}^{2}}.$$
(9)
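OpenCV has no dedicated Prewitt function, but the kernels of Eq. (9) can be applied with the general filtering routine `cv2.filter2D`; a minimal sketch (filter2D computes correlation, and flipping these kernels only changes their sign, so the gradient magnitude is unaffected):

```python
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Prewitt kernels from Eq. (9): horizontal and vertical derivatives
kx = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]], dtype=np.float64)
ky = kx.T
gx = cv2.filter2D(img, -1, kx)
gy = cv2.filter2D(img, -1, ky)

g = np.sqrt(gx ** 2 + gy ** 2)  # gradient magnitude, as in Eq. (9)
```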
The Roberts operator is called a cross operator because it applies cross-shaped differences to the values of the image matrix. It is the simplest and, in turn, the fastest method of contour selection; it is used when a higher speed of gradient calculation is needed, at the cost of accuracy:
$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \quad A' = |a_{11} - a_{22}| + |a_{12} - a_{21}|,$$
(10)
where A′ is the processed image and A is the original image.
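Equation (10) is simple enough to implement directly with NumPy slicing, each output pixel combining a pixel with its diagonal neighbours; a sketch:

```python
import numpy as np

def roberts_cross(a):
    """Apply Eq. (10) at every 2x2 window of a grayscale image a."""
    a = a.astype(np.float64)
    # |a11 - a22| + |a12 - a21| for each pixel and its diagonal neighbours;
    # the result is one row and one column smaller than the input
    return (np.abs(a[:-1, :-1] - a[1:, 1:]) +
            np.abs(a[:-1, 1:] - a[1:, :-1]))
```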

The most commonly used detector is Canny's, which is implemented in the OpenCV software library; this library can be called from almost any programming language [21].

Filtering yields a set of data suitable for further processing, but this data can rarely be used directly without such processing.

The transition from filtering to logical processing is provided by the operations of dilation and erosion of binary images. These methods allow the removal of noise from a binary image by growing or shrinking its existing elements.
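In OpenCV these operations are `cv2.erode` and `cv2.dilate`, usually chained into morphological opening and closing; a brief sketch with a hypothetical binary mask:

```python
import cv2
import numpy as np

binary = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical binary image
kernel = np.ones((3, 3), np.uint8)                     # structuring element

eroded = cv2.erode(binary, kernel)    # shrinks elements, removes small speckles
dilated = cv2.dilate(binary, kernel)  # grows elements, fills small holes

# Usual combinations: "opening" removes noise, "closing" bridges small gaps
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
```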

The logical processing of filtering results consists in determining the singular points that can be correlated with other images for similarity.

Such points are unique characteristics of the object, allowing it to be compared with similar classes of objects. There are several dozen ways to select such points. Some techniques select matching points in adjacent frames, some remain stable over longer periods and under lighting changes, and some allow finding specific points even when the object rotates.
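As one example of such a technique, ORB keypoints in OpenCV tolerate rotation and moderate lighting change and can be matched between frames; the sketch below uses hypothetical frame files and illustrative parameters:

```python
import cv2

img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors on both frames
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the frames and keep the closest pairs first
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
```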

The use of machine learning occurs at the last stage of image processing, using methods that do not work directly with the image but allow certain decisions to be made. The learning algorithm builds a model with which it can analyze a new image and decide which of the objects is present in the test images.

Each of these test images consists of an array of points whose coordinates serve as the weights of individual image features. For example, features can be "the presence of eyes," "the presence of a nose," "the presence of two hands," "the presence of ears," and the like. All of them are extracted by detectors that hold comparative patterns, such as human features or geometric shapes.

The purpose of such classification is to filter regions in feature space so as to reach a required accuracy of image recognition. To solve such problems effectively, classifiers are selected to meet the desired criterion.

Aviators also work on creating effective templates in order to solve problems of terrain recognition, recognition of ground objects, and recognition of UAV defects that appear during a flight. The latter has been widely applied to detecting icing on the outer parts of aircraft, as icing often leads to crashes and emergency situations. There are various approaches to warning about and identifying such dangerous phenomena. Researchers from the National Aviation University invented a device that can determine possible areas of aircraft icing using airborne meteorological radar [22]. Although the expediency of using machine vision for such tasks is debatable (there is a possibility of false alarms, pollution of optical devices, shadows from clouds, weather conditions, etc.), it could be used as a redundant element of existing control systems.

The need to identify changes in the external parts of the UAV can be met by video recording of individual zones (those most vulnerable to icing), changes in which can signal the presence of foreign material on the outside of the UAV.

Since icing is characterized by bulges on the surface of the object, directing light at a certain angle will cast shadows and, in turn, create a clear color contrast. It is this contrast that can be identified by machine algorithms carrying out an effective contour analysis of the graphic changes.

The corresponding procedure can be implemented using the Canny algorithm for detecting changes in image boundaries. Video registration should be focused on individual areas of the aircraft so that foreign objects do not fall into the observation area, by placing the DVRs at points of optimal observation, for example, as shown in Fig. 2. The observation zones covered by the DVRs can be aligned with the direction of air flow (Fig. 3).
Fig. 2

Possibility of video recorders to capture areas of possible icing (developed by authors)

Fig. 3

Zones of video observation of possible icing of the surface of the aircraft (developed by authors)

Therefore, the problem in this case is to relate the image captured before icing to the changes in that image in the presence of ice. An important aspect in setting the task is the optimal orientation of the DVR: it should cover the entire area in such a way that the frame contains no other graphic objects, because the algorithm compares the video recorded before icing with the video recorded after it.

The solution to this problem can be accomplished by comparing arrays of data that describe a graphic object, using almost any programming language. For example, the Python programming language can work with the OpenCV library. The "imread" function of this library converts a raster image into an array of data in which each pixel has a numerical characteristic. By passing the created arrays to the "calcHist" function, the total value of all the digital color values represented in the image can be obtained, followed by comparison of the histograms using the "compareHist" function. The "matchTemplate" function can be applied to match the image areas that coincide. The results returned by both functions can be combined into a sum that characterizes the quantitative difference between the two images, which can be applied to detect changes of the homogeneous surface of the aircraft body if ice begins to form. Thus, the base image (1.jpg) can be compared with the current one (2.jpg) to measure the discrepancy quantitatively. The code that can be used for such a comparison is given below.
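The original listing is not reproduced in this extract, so the following is a minimal sketch consistent with the description above; the Bhattacharyya histogram metric, the normalized-correlation template method, and the combined score are our illustrative choices:

```python
import cv2

# Reference surface before icing and the current frame (file names as in the text)
base = cv2.imread("1.jpg")
current = cv2.imread("2.jpg")

# Colour histograms over all three channels, then a histogram-distance score
hist_base = cv2.calcHist([base], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
hist_cur = cv2.calcHist([current], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
hist_diff = cv2.compareHist(hist_base, hist_cur, cv2.HISTCMP_BHATTACHARYYA)

# Template matching: how well the reference area is still found in the frame
gray_base = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)
gray_cur = cv2.cvtColor(current, cv2.COLOR_BGR2GRAY)
match = cv2.matchTemplate(gray_cur, gray_base, cv2.TM_CCOEFF_NORMED)
_, best_match, _, _ = cv2.minMaxLoc(match)

# A single discrepancy sum: grows when histograms diverge or matching degrades
score = hist_diff + (1.0 - best_match)
print(f"histogram distance: {hist_diff:.3f}, template match: {best_match:.3f}, "
      f"total discrepancy: {score:.3f}")
```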
Code testing made it possible to establish the following results when comparing a reference image with a modified one (Fig. 4).
Fig. 4

The results of the implementation of the program code. Simulation of icing of the working surface of the wing of an aircraft and the establishment of discrepancies in the images

5 Conclusions and discussion

The problems of using machine vision in unmanned aviation stem from the probability of errors, which requires developers to achieve greater accuracy of pattern recognition. Among the wide range of software, there are several tools for implementing mechanisms for diagnosing graphic images. The most common are contour analysis algorithms, which have found application in many software libraries, and the programming languages for working with them offer a wide range of implementations. The example of comparing two raster images demonstrates the capabilities of the Python programming language and the OpenCV software library for detecting differences in graphic objects, and it may be developed further for use during aircraft operation.

Thus, the problem of diagnosing icing of the UAV fuselage can be solved using machine vision. Full-scale experiments on this example were not carried out; the question is therefore put forward for further development.


Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

References

  1. Department for Transport in the UK (2019) Taking flight: the future of drones in the UK. Retrieved June 19, 2019, from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/771673/future-of-drones-in-uk-consultation-response-web.pdf
  2. Ministry of Transport and Ministry of Business, Innovation and Employment in New Zealand (2019) Drones: benefits study high level findings. Retrieved June 25, 2019, from https://www.transport.govt.nz/assets/Import/Uploads/Air/Documents/03ee506069/04062019-Drone-Benefit-Study.pdf
  3. European Commission (2019) Unmanned aircraft. Retrieved July 10, 2019, from https://ec.europa.eu/growth/sectors/aeronautics/rpas_en
  4. Department of Defense in the USA (2005) Unmanned aircraft systems roadmap 2005-2030. Retrieved July 5, 2019, from https://fas.org/irp/program/collect/uav_roadmap2005.pdf
  5. Hobbs A, Herwitz SR (2006) Human challenges in the maintenance of unmanned aircraft systems. Interim report to FAA and NASA
  6. Kharchenko V, Shmelova T, Sikirda Y (2011) Methodology for analysis of decision making in air navigation system. Bull Natl Aviat Univ 48(3):85-94
  7. Matiychik M, Lavryk T, Suvorova N, Kachalo I (2009) Peculiarities of the process of aerospace work execution by unmanned vehicles. Bull Natl Aviat Univ 40(3):44-49
  8. Amoukteh A, Janda J, Vincent J (2017) Drones go to work. Retrieved June 19, 2019, from http://image-src.bcg.com/Images/BCG-Drones-Go-to-Work-Apr-2017_tcm9-151218.pdf
  9. Abd-Elrahman A, Pearlstine L, Percival F (2005) Development of pattern recognition algorithm for automatic bird detection from unmanned aerial vehicle imagery. Surv Land Inf Sci 65(1):37
  10. Kumar NM, Sudhakar K, Samykano M, Jayaseelan V (2018) On the technologies empowering drones for intelligent monitoring of solar photovoltaic power plants. Procedia Comput Sci 133:585-593
  11. Deepface. Retrieved June 19, 2019, from https://deepface.ir
  12. TensorFlow open source machine learning engine. Retrieved June 19, 2019, from https://www.tensorflow.org/
  13. Tu Z, Chen X, Yuille AL, Zhu SC (2005) Image parsing: unifying segmentation, detection, and recognition. Int J Comput Vis 63(2):113-140
  14. Shih FY (2010) Image processing and pattern recognition: fundamentals and techniques. Wiley, Hoboken
  15. Buryachenko V (2014) Blur elimination algorithm for video sequences of static scenes based on the application of the Gaussian anisotropic filter. Reshetnev's Read 2(18):235-236
  16. Freeman H (1962) On the digital computer classification of geometric line patterns. Proc Natl Electron Conf 18:312-324
  17. Vanyukova D, Popov S, Sokolov P (2014) Combination of a digital cartographic image of the terrain with a radar image. In: Materials of the XVI conference of young scientists "Navigation and traffic management", St. Petersburg. Retrieved June 21, 2019, from http://www.elektropribor.spb.ru/kmu2014/ref
  18. Sobel I, Feldman G (1968) A 3x3 isotropic gradient operator for image processing. A talk at the Stanford Artificial Intelligence Project, pp 271-272
  19. Gonzalez RC, Wintz P (1977) Digital image processing. Applied mathematics and computation, vol 13. Addison-Wesley, Reading, p 451
  20. Shevchenko E (2013) The task of recognizing the contour of the palm on complex images. Artif Intell 2013(4):244-251
  21. The official OpenCV library support site. Retrieved June 21, 2019, from https://opencv.org/
  22. Patent of Ukraine 100763 U, IPC G01S 13/95 (2006.01). Device for determining areas of possible icing of planes and helicopters. O. Petercev, F. Yanovsky. Declared Feb 19, 2015, published Aug 10, 2015, Bull. No. 15

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. National Aviation University, Kiev, Ukraine
  2. RMIT University, Melbourne, Australia
