Abstract
Detection and localization of astronomical objects are two of the most fundamental tasks in astronomical science, where localization builds on the detection results. Object localization is based on modeling the point spread function and estimating its parameters. Commonly used models, such as Gauss or Moffat, provide a good approximation of the analyzed objects, but they may not be sufficient for exact applications such as object energy estimation; more sophisticated models are therefore appropriate. Another key role is played by the way the objective function is constructed. The least squares method is often used, but it assumes normally distributed data, which raises the question of applying a maximum likelihood approach instead. A further important factor of the presented problem is the choice of a suitable optimization method. Classical methods for objective function minimization usually require a good initial estimate of all parameters and differentiation of the objective function with respect to the model parameters. The results indicate that stochastic methods such as simulated annealing or harmony search achieve better results than the classical optimization methods.
1 Introduction
Astronomy is a natural science that deals with the study of objects such as planets, moons, stars, etc. Identification of these objects is one of the most fundamental topics in astronomical science, and the branch dealing with object identification is called astronomical photometry. Identification can be understood as a sequence of two steps. The first step is object detection, i.e., finding the regions where an object occurs. The next step is its localization (modeling), i.e., obtaining exact information about its parameters. This information can be further used in energy estimation or in the study of object interactions, where we want to know how close two objects can be while still being distinguishable by deconvolution.
The objects mentioned above are far away from the Earth and thus appear as bright points in the night sky. When an astronomical image, Fig. 1, is acquired, the situation is quite different: the bright point is changed and appears in the image as a smeared pattern. This is caused by the passage of the original information through the imaging system (Buil 1991; Howell 2006) used for image acquisition. The resulting shape of the captured objects is given by the impulse response of this system, also called the Point Spread Function (PSF). The PSF of the applied imaging system is influenced by many factors and is composed as a convolution of the particular PSFs of the system's components. Knowledge, or a good estimate, of the resulting system PSF plays a key role in astronomical photometry (Sterken and Manfroid 1992; Budding and Demircan 2007) when we want accurate information about the observed objects.
The main aim of this article is to introduce a different approach to modeling the resulting PSF and to compare it with commonly used methods. The physical phenomena of interference on a thin lens, atmospheric turbulence and focusing, all of which influence the resulting PSF, are introduced. Besides the description of these individual phenomena, a method of combining them in the frequency domain using the convolution theorem is also described. Optimization of the described PSF models is based on the authors' previous work dealing with detection (Mojzis et al. 2012) and identification (Mojzis et al. 2012) of the analyzed objects, based on the analysis of dark and light frames (Howell 2006) using hypothesis testing. This topic is summarized in Sect. 2 to give a general overview of how the authors arrived at the objective function derived in Sect. 3.2.
Analyzed images can be considered as functions with many local minima and maxima. The second aim of this work is therefore the application and comparison of classical optimization methods and optimization heuristics. The authors want to show that the second approach is more suitable with respect to finding the best solution and the repeatability of the obtained results.
2 Object detection via multiple hypothesis testing
Astronomical systems represent a special kind of imaging system that uses CCD image sensors (Buil 1991). Processing of images acquired by them requires good knowledge of their properties. Analysis of these systems can answer many questions about the applicability of algorithms for further processing of the acquired data. A CCD sensor works as a photon counter, so it can be assumed that the output of an astronomical imaging system is a Poisson random variable.
Analysis of these systems is based on the processing of dark frames. The important question about the used CCD sensor is whether it has the same properties in each image cell (pixel). The answer to this question is given by a statistical test described in the following section.
2.1 Noise model
Astronomical images can be expressed mathematically as follows
where \(f(k,l)\) are the data and \(n(k,l)\) represents noise called the dark current. This type of noise is caused by thermally generated charge due to the long exposure times. The dark current can simply be removed by subtracting a dark frame, which maps the thermally generated charge in the CCD sensor. It can be considered that this type of noise is Poisson distributed (Mojzis et al. 2012) in the following way
where \(\lambda (k,l)\) is the expected number of occurrences in the CCD pixel cell \((k,l)\) and \(\lambda \in \mathbb {R}_0^+\). This claim can be verified on a sample of dark images by a statistical test for the Poisson probability distribution, which can be found in Mojzis et al. (2012), Brown and Zhao (2001).
In the following text, we will consider an average dark frame
where \(m\) is the number of dark images, \(n_i(k,l)\) is the noise in the \(i\)th frame, and we can further assume that \(d(k,l) = \hat{\lambda }(k,l)\) is an estimate of the parameter \(\lambda \).
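As a minimal illustration, the per-pixel estimate \(\hat{\lambda }(k,l)\) is simply the arithmetic mean over the dark stack. The following sketch (Python with NumPy is assumed; the function name is ours) computes the average dark frame from \(m\) dark images:

```python
import numpy as np

def average_dark_frame(dark_stack):
    """Average of m dark frames, cf. Eq. (3).

    dark_stack : array of shape (m, K, L) holding the dark images n_i(k, l).
    Returns d(k, l), which under the Poisson assumption is the maximum
    likelihood estimate of lambda(k, l) in each pixel.
    """
    return np.asarray(dark_stack, dtype=float).mean(axis=0)
```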
2.2 Tests for the Poisson distribution
Let \(x_1,\ldots ,x_n\) be independent non-negative integer valued random variables and let us consider the null and the alternative hypothesis (Brown and Zhao 2001) as
The following test is based on the Anscombe transform (Brown and Zhao 2001; Surhone et al. 2010), which is a second-order variance-stabilizing transform for a Poisson variable. It is given by
where \(x_{i}\) are Poisson distributed data and \(y_{i}\) are transformed data with approximately constant standard deviation.
The test statistic can be based on Eq. (6) and expressed as follows (Brown and Zhao 2001)
where \(\overline{y}\) is the mean value of all \(y_i\). Equation (7) suggests that \(y_{i}\) is approximately normal with variance 1/4 and mean (Brown and Zhao 2001)
where \(\lambda _{i}\) is the expected number of occurrences and \(\lambda \in \mathbb {N}\).
Under the assumptions of Eqs. (4) and (5), \(T\) has approximately a \(\chi ^2\) distribution with \((n-1)\) degrees of freedom when \(H_{0}\) is true. Thus \(H_{0}\) is rejected if \(T > \chi ^{2}_{n-1;1-\alpha }\). The approximate \(p\) value (Brown and Zhao 2001) becomes
where \(\Phi \) is the cumulative distribution function of the standard normal distribution (Papoulis and Pillai 2002), and \(\rho (\lambda )\) and \(\xi (\lambda )\) are, respectively, expressed as
where D is the variance. If \(n\) is large then \(\lambda \) in Eqs. (10) and (11) can be replaced by its estimate \(\hat{\lambda } = \overline{x}\) (Brown and Zhao 2001; Papoulis and Pillai 2002).
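For illustration, a hedged sketch of the test is given below. It assumes the variance-stabilizing transform \(y_i = \sqrt{x_i + 3/8}\), consistent with the variance of approximately 1/4 stated above, and uses the \(\chi ^2\) form of the test statistic, \(T = 4\sum_i (y_i - \overline{y})^2\); the normal approximation of the \(p\) value described above is not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

def poisson_dispersion_test(x, alpha=0.05):
    """Test H0: x_1, ..., x_n ~ Poisson(lambda) with a common lambda.

    Assumes y_i = sqrt(x_i + 3/8) (variance ~ 1/4 for Poisson data), so that
    T = 4 * sum((y_i - y_bar)^2) is approximately chi2(n - 1) under H0.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    y = np.sqrt(x + 3.0 / 8.0)
    T = 4.0 * np.sum((y - y.mean()) ** 2)
    p_value = chi2.sf(T, df=n - 1)                  # approximate p value
    reject = T > chi2.ppf(1.0 - alpha, df=n - 1)    # rejection rule for H0
    return T, p_value, reject
```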
2.3 Multiple hypothesis testing
When multiple hypotheses are tested, it is necessary to control the portion of incorrectly rejected null hypotheses (Papoulis and Pillai 2002; Efron 2010) (type I errors). One of the procedures used for this purpose is the False Discovery Rate (FDR). It is not as stringent as Familywise Error Rate (FWER) procedures. FDR is given by
where \(V\) and \(S\), respectively, are the numbers of false positive (type I error) and true positive (Efron 2010) hypotheses and \(R = V+S\). Evaluation of FDR can be based on the Bonferroni correction, which is a multiple-comparison correction used when several dependent or independent statistical tests are performed simultaneously. FDR with the Bonferroni correction is based on the rejection rule
where \(k = 1,\ldots ,n\), \(n\) is the number of tested hypotheses and \(\alpha \) is the significance level (Papoulis and Pillai 2002), usually \(\alpha = 0.05\).
A related correction, called the Sidak's correction (Efron 2010), gives a weaker but still valid bound compared to the Bonferroni correction under the assumption that the individual tests are independent. It is given by
The critical values \(p_{k, {\mathrm {crit.}}}\) create a curve that may or may not cross the curve of the original \(p\) values sorted in ascending order. The number of \(p\) values that occur under this curve gives the actual number of hypotheses that can be rejected and are statistically significant. The ratio between the correctly rejected and the previously rejected \(H_0\) gives the FDR, which should be less than \(\alpha /2\).
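The counting of significant \(p\) values can be sketched as follows. Since Eq. (14) is not reproduced here, the sketch assumes a Sidak-type critical curve \(p_{k, {\mathrm {crit.}}} = 1 - (1-\alpha )^{k/n}\) for the \(k\)th smallest \(p\) value; the exact form used in the article may differ.

```python
import numpy as np

def count_significant(p_values, alpha=0.05):
    """Sort the p values and count how many lie under the critical curve.

    Assumes p_crit(k) = 1 - (1 - alpha)**(k / n), k = 1..n, as a Sidak-type
    critical curve; returns the sorted p values, the curve and the number of
    statistically significant (rejected) hypotheses.
    """
    p_sorted = np.sort(np.asarray(p_values, dtype=float))
    n = p_sorted.size
    k = np.arange(1, n + 1)
    p_crit = 1.0 - (1.0 - alpha) ** (k / n)
    under = p_sorted <= p_crit
    n_reject = int(k[under].max()) if under.any() else 0
    return p_sorted, p_crit, n_reject
```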
2.4 False discovery rate detection
This approach compares information from light and dark images through the tools of mathematical statistics. Let us recall that the Negative binomial (NB) distribution approximates, under certain conditions, the Poisson distribution. The NB distribution is a discrete probability distribution of the number of successes in a sequence of Bernoulli trials before a specified (non-random) number of failures \(\kappa \) occurs. The probability mass function (PMF) of the NB distribution (Johnson et al. 2005) is given by
where \(q \in \langle 0,1\rangle \) is the probability of success in each trial, \(x \in \mathbb {N}\), and \(\kappa > 0\) is the number of failures until the experiment is stopped.
Let us further consider a sequence of NB distributions where \(\kappa \rightarrow \infty \) and the probability of success goes to zero in such a way as to keep the mean \(\lambda \) of the distribution constant. The parameter \(q\) then has to be
This parametrization allows us to express the PMF as follows
As \(\kappa \rightarrow \infty \), the second factor tends to one and the denominator of the third factor tends to the exponential function
which is the probability mass function of the Poisson distribution with expected value \(\lambda \). This leads to the conclusion that the NB distribution converges to the Poisson distribution, where \(\kappa \) controls the deviation from the Poisson case, and it can be written as
The cumulative distribution function (CDF) of the NB distribution is then given by
As mentioned, object detection may be based on a comparison of dark and light frames. Under the assumption that \( H_{0}:\;x \sim {\mathrm {Poisson}}(\lambda )\) with \(\lambda > 0\), the approximate \(p\) value using Eq. (20) may be written as
where \(q = N/(N+1)\), \(\kappa = \hat{\lambda }N+\Delta \), \(N\) is the number of dark frames, \(\hat{\lambda }\) is the estimate of \(\lambda \) and \(\Delta \) is equal to \(1\).
When the Sidak's correction is applied to the \(p\) values given by Eq. (21), the values that occur under the \(p_{{\mathrm {crit.}}}\) curve are statistically significant. These values thus represent areas of the light image where objects may occur.
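A per-pixel sketch of this detection rule is given below (Python with SciPy is assumed). It assumes that the \(p\) value of Eq. (21) is the upper-tail NB probability of the observed light-frame count; in SciPy's \((n, p)\) parametrization, \({\mathrm {nbinom}}(\kappa , q)\) has mean \(\kappa (1-q)/q \approx \hat{\lambda }\), which matches the dark-frame background.

```python
import numpy as np
from scipy.stats import nbinom

def detection_p_value(x_light, d_dark, N, delta=1):
    """Approximate per-pixel p value for object detection, cf. Eq. (21).

    x_light : light-frame counts x(k, l)
    d_dark  : average dark frame d(k, l) = lambda_hat(k, l)
    N       : number of dark frames, with q = N / (N + 1)

    Assumption: the p value is P(X >= x_light) under NB(kappa, q) with
    kappa = lambda_hat * N + delta, using SciPy's (n, p) convention.
    """
    kappa = np.asarray(d_dark, dtype=float) * N + delta
    q = N / (N + 1.0)
    return nbinom.sf(np.asarray(x_light) - 1, kappa, q)   # upper tail
```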
3 Object modeling
Astronomical photometry based on two-dimensional fitting uses the fact that the profiles of astronomical point sources imaged on two-dimensional arrays are described by the PSF (Howell 2006; Starck and Murtagh 2006)
where \(*\) is the convolution operator and \(x\), \(\mathrm {object}\), \(\mathrm {PSF}\) are 2D functions, which represent the resulting image, the original object and the system response, respectively. PSFs can be modeled by a number of simple or more complex mathematical functions that are derived from a deeper knowledge of the studied problem.
3.1 Basic PSF models
Statistical models based on different PSFs are commonly used for object localization in astronomical science. Two simple models are usually applied. The first one is the two-dimensional Gaussian function (Sterken and Manfroid 1992)
where \(A\) is amplitude, \(x_0\), \(y_0\) are shifts in the \(x-y\) plane, \(\sigma > 0\) is its standard deviation and \(k\), \(l\) are pixel indices as coordinates.
The second model is the statistical model described by Moffat (1969), see also Sterken and Manfroid (1992), which is a generalization of the Cauchy distribution
where \(\beta \) is a shape parameter of the PDF satisfying \(0\le \beta \le 50\).
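For reference, both PSF models can be evaluated on the pixel grid as in the following sketch (Python/NumPy assumed). The circularly symmetric, single-width parametrization and the Moffat width parameter alpha are assumptions, since Eqs. (23) and (24) are not reproduced here.

```python
import numpy as np

def gauss_psf(k, l, A, x0, y0, sigma):
    """Circular 2D Gaussian PSF (cf. Eq. (23))."""
    r2 = (k - x0) ** 2 + (l - y0) ** 2
    return A * np.exp(-r2 / (2.0 * sigma ** 2))

def moffat_psf(k, l, A, x0, y0, alpha, beta):
    """Circular Moffat PSF (cf. Eq. (24)); alpha is an assumed width parameter."""
    r2 = (k - x0) ** 2 + (l - y0) ** 2
    return A * (1.0 + r2 / alpha ** 2) ** (-beta)

# example: evaluate both models on a 32 x 32 pixel grid
k, l = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
g = gauss_psf(k, l, A=1000.0, x0=15.5, y0=16.0, sigma=2.0)
m = moffat_psf(k, l, A=1000.0, x0=15.5, y0=16.0, alpha=3.0, beta=2.5)
```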
As mentioned, these two models are commonly used in astronomical photometry together with Eq. (22), but they do not comprise some important facts. If we consider that the light passes through the optical system before incidence onto the image sensor, then it is possible to approximate the resulting system PSF by diffraction on a circular aperture (Sharma 2006)
where \(I_0\) is the maximum intensity of the pattern, \(J_1\) is the Bessel function of the first order, \(k = 2\pi /\lambda \) is the wavenumber, \(\lambda \) is the wavelength, \(a\) is the radius of the aperture and \(\theta \) is the angle of observation.
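Equation (25) is the classical Airy pattern and can be evaluated directly, e.g., as in the following sketch using scipy.special.j1; the limit value 1 of \(\big (2J_1(u)/u\big )^2\) at \(\theta = 0\) is handled explicitly.

```python
import numpy as np
from scipy.special import j1

def airy_pattern(theta, I0, a, wavelength):
    """Fraunhofer diffraction on a circular aperture, cf. Eq. (25).

    theta      : observation angle(s) in radians
    I0         : maximum intensity of the pattern
    a          : radius of the aperture
    wavelength : wavelength of the light, k = 2 * pi / wavelength
    """
    k = 2.0 * np.pi / wavelength
    u = k * a * np.sin(np.asarray(theta, dtype=float))
    with np.errstate(invalid="ignore", divide="ignore"):
        val = (2.0 * j1(u) / u) ** 2
    return I0 * np.where(u == 0.0, 1.0, val)   # (2*J1(u)/u)^2 -> 1 as u -> 0
```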
From physics, it is known that the diffraction phenomenon described by Eq. (25) is accompanied by the interference phenomenon (Sharma 2006), and thus we can call this relation the interference model, which is expressed in the frequency domain as
where \(\Omega > 0\) is the frequency, and the corresponding PSF is therefore
where \(\mathcal {H}\) is Hankel transform (Andrews and Shivamoggi 1999).
Another important factor influencing the resulting image is the passage of the light through the atmosphere; the relevant phenomenon here is atmospheric turbulence. According to McMinn (2006), we can express the frequency spectrum of the turbulence as
where \(\psi \) is the scale of the turbulence and \(\sigma > 0\) represents the standard deviation of the velocity disturbance.
The last phenomenon described in this article that can influence the resulting PSF is focusing (Birney et al. 2006; Fischer et al. 2008), which can be expressed as follows
where \(\rho > 0\) is focusing radius and
In this article, the first two commonly used models, i.e., Gauss and Moffat, are compared with more sophisticated models given by the convolution of interference and turbulence (INTERTURB) or the convolution of interference and focusing (INTERFOC). These combined models can be written in the space domain, respectively, as
and
which can be expressed in the frequency domain, with respect to the convolution theorem, as the product of the Fourier images
or
Because an analytical inverse transform of \(S(\omega )\) to \({\mathrm {s}}(r)\) is not available, the data analyzed by the combined models are processed in the frequency domain, subsequently transformed numerically into the space domain, and properly modified for the optimization of the amplitude \(A\) and the shifts \(x_0\) and \(y_0\).
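The following sketch illustrates this procedure. Since the exact frequency-domain expressions of Eqs. (26), (28) and (29) are not reproduced here, the two spectra are passed in as placeholder callables; only the multiplication of the Fourier images, the numerical inverse transform and the subsequent amplitude/shift handling are shown.

```python
import numpy as np

def combined_psf(interference_ft, blur_ft, A, x0, y0, size=64):
    """Sketch of INTERTURB / INTERFOC evaluation via the convolution theorem.

    interference_ft, blur_ft : callables returning the frequency-domain
        factors of the interference model and of the turbulence/focusing
        model on a grid of radial frequencies (placeholders for Eqs. (26),
        (28) and (29), whose exact forms are not reproduced here).
    """
    f = np.fft.fftfreq(size)
    fx, fy = np.meshgrid(f, f, indexing="ij")
    omega = 2.0 * np.pi * np.hypot(fx, fy)            # radial frequency grid
    S = interference_ft(omega) * blur_ft(omega)       # product of Fourier images
    s = np.fft.fftshift(np.real(np.fft.ifft2(S)))     # numerical inverse transform
    s = s / s.max()                                   # normalize to unit peak
    # nearest-pixel shift to (x0, y0) and scaling by the amplitude A
    shift = (int(round(x0)) - size // 2, int(round(y0)) - size // 2)
    return A * np.roll(s, shift, axis=(0, 1))
```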
3.2 Objective function definition and its optimization
Optimization of the models introduced in Sect. 3.1 is based on the minimization of an objective function. This function could be derived using the least squares method (LSM), but, as mentioned in Mojzis et al. (2012), data acquired by an astronomical CCD camera are Poisson distributed. Thus we use a different approach based on the maximum likelihood estimate (MLE), see Mojzis et al. (2012).
In statistics, MLE is a method of estimating the parameters of a statistical model, Eq. (23). When applied to a data set and a given statistical model, MLE provides estimates of the model's parameters. For a fixed set of data and a given statistical model, it selects the parameters under which the measured data have the greatest probability, i.e., the estimated parameters maximize the likelihood function (Pawitan 2001; Severini 2001).
Let us now consider the astronomical image defined by Eq. (1), the noise model, Eq. (2), and the average dark frame, Eq. (3).
The model image with astronomical objects described by a PSF model can be derived from Eq. (1) by replacing the expression \(f(k,l)\) in the following way
where \((k,l) \in \mathbb {D}^{M \times N}\), \(M\) and \(N\) are dimensions of the rectangle region of interest \(\mathbb {D}\) and \(f(k,l,{\varvec{p}})\) is PSF model of astronomical object with vector of parameters \({\varvec{p}}\).
When it is supposed that the data \(x\) are Poisson distributed with number of occurrences \(\lambda \)
then it is possible to write that
When MLE is applied to Eq. (38) and \(x\) is replaced by Eq. (36), the opposite (negative) likelihood function can be written as
where \(x(k,l)\) is the analyzed light image, \(d(k,l)\) is the corresponding average dark frame and \(f(k,l,{\varvec{p}})\) is the diffusion model whose parameters are estimated.
Combination of Eqs. (38) and (39) leads to the final form of function \(\phi \)
where \(c\) is a constant. The constant \(c\) depends only on the data and can be set so that \(\phi \ge 0\), which yields
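A sketch of the resulting objective is given below. The constant is chosen here as \(\sum [x\ln x - x]\), i.e., the value of the negative log-likelihood kernel at \(\mu = x\), which makes \(\phi \) a Poisson deviance-type quantity and guarantees \(\phi \ge 0\); the exact constant used in Eq. (41) may differ.

```python
import numpy as np

def objective_phi(params, x_light, d_dark, psf_model):
    """Poisson maximum-likelihood objective in the spirit of Eq. (41).

    The model mean is mu(k, l) = f(k, l, p) + d(k, l).  The data-dependent
    constant is taken as sum(x*log(x) - x), so that each pixel contributes
    x*log(x/mu) - (x - mu) >= 0 and hence phi >= 0.
    """
    x = np.asarray(x_light, dtype=float)
    mu = np.clip(psf_model(params) + d_dark, 1e-12, None)   # keep log finite
    xlogx = np.where(x > 0, x * np.log(np.clip(x, 1e-12, None)), 0.0)
    return float(np.sum(mu - x * np.log(mu) + xlogx - x))
```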
3.3 Optimization methods
For the purpose of objective function optimization, the Matlab fmincon function was applied first. This function is based on the evaluation of derivatives, which may be a problem in the case of discontinuous functions, and it also requires good initial estimates of the optimized parameters. Another problem is that it may find a local minimum instead of the global minimum of the evaluated function. This is the reason why optimization heuristics were also applied and compared.
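The contrast between the two approaches can be illustrated in Python with SciPy, which offers a bound-constrained local optimizer analogous to fmincon and a stochastic global method (dual annealing). The multimodal test function below is only an illustrative surrogate for \(\phi \), not the objective of Eq. (41).

```python
import numpy as np
from scipy.optimize import minimize, dual_annealing

def phi(p):
    """Illustrative multimodal surrogate objective (many local minima)."""
    p = np.asarray(p)
    return float(np.sum(p ** 2 - 10.0 * np.cos(2.0 * np.pi * p) + 10.0))

bounds = [(-5.12, 5.12)] * 4          # plays the role of LB / UB in Table 3

# local, derivative-based search (SciPy analogue of Matlab's fmincon):
# needs an initial estimate p0 and may get stuck in a local minimum
p0 = np.array([3.1, -2.9, 4.2, -4.0])
local = minimize(phi, p0, method="L-BFGS-B", bounds=bounds)

# stochastic global search (simulated annealing as one example heuristic):
# requires only the bounds and a budget of objective function evaluations
best = dual_annealing(phi, bounds=bounds, maxfun=50000, seed=1)
print(local.fun, best.fun)            # the heuristic typically reaches ~0
```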
3.3.1 Controlled Random Search
Controlled Random Search (CRS) (Price 1977) is based on the random search (RS) principle, but it combines the random search and mode-seeking routines into a single continuous process. CRS algorithms are population-set-based algorithms specially developed for treating global optimization problems. Like genetic and differential evolution algorithms, CRS aims to minimize (or maximize) a certain objective function using an evolving population of trial solutions.
Random Search (Rastrigin 1963) is a family of numerical optimization methods that do not require the gradient of the optimized function, and RS can hence be used on functions that are not continuous or differentiable. Such optimization methods are also known as direct-search, derivative-free, or black-box methods.
The name Random Search is attributed to Rastrigin (1963) who made an early presentation of RS along with basic mathematical analysis. RS works by iteratively moving to better positions in the search space, which are sampled from a hypersphere surrounding the current position.
3.3.2 Cuckoo Search
Cuckoo Search (CS) (Yang and Deb 2009; Gandomi et al. 2013) is inspired by the brood parasitism of cuckoo species, which lay their eggs in the nests of other birds. Some host birds can come into direct conflict with the cuckoo and either throw the alien eggs away or simply abandon the nest and build a new one somewhere else.
Each egg in a nest represents a solution, and a cuckoo egg represents a new solution. The aim is to use the new and potentially better solutions (cuckoos) to replace a not-so-good solution in the nests. In the simplest form, each nest has one egg. The algorithm can be extended to more complicated cases in which each nest has multiple eggs representing a set of solutions.
3.3.3 Harmony Search
Harmony Search (HS) (Geem et al. 2011; Geem 2009) is inspired by the musician improvisation process. It imitates the behavior of musicians who adjust the pitches of their instruments together to achieve a pleasing harmony as measured by esthetic standards. It is a very successful metaheuristic algorithm that can explore the search space of a given problem in a parallel optimization environment, where each solution (harmony) vector is generated by intelligently exploring and exploiting the search space.
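A minimal sketch of harmony search is shown below; the parameters HMS (harmony memory size), HMCR (memory considering rate), PAR (pitch adjusting rate) and the bandwidth follow Geem's standard formulation, but their values here are illustrative and do not correspond to the settings used in the experiments.

```python
import numpy as np

def harmony_search(phi, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   n_eval=50000, seed=None):
    """Minimal harmony search sketch (illustrative parameter values)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.array(bounds, dtype=float).T
    dim = lb.size
    hm = rng.uniform(lb, ub, size=(hms, dim))       # harmony memory
    fit = np.array([phi(h) for h in hm])
    for _ in range(n_eval - hms):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                 # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:              # pitch adjustment
                    new[j] += bw * (ub[j] - lb[j]) * rng.uniform(-1.0, 1.0)
            else:                                   # random selection
                new[j] = rng.uniform(lb[j], ub[j])
        new = np.clip(new, lb, ub)
        f_new = phi(new)
        worst = int(np.argmax(fit))
        if f_new < fit[worst]:                      # replace the worst harmony
            hm[worst], fit[worst] = new, f_new
    best = int(np.argmin(fit))
    return hm[best], fit[best]
```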
3.3.4 Simulated Annealing
Simulated Annealing (SA) (Kirkpatrick et al. 1983; Laarhoven and Aarts 1987) is a generic probabilistic metaheuristic for global optimization. It is inspired by annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects.
Each point of the search space corresponds to a state of some physical system, and the function \(F(x)\) to be minimized is analogous to the internal energy of the system in that state. The goal is to bring the system, from an arbitrary initial state, to a state with the minimum possible energy.
At each step, the SA heuristic considers some neighboring state \(s'\) of the current state \(s\), and probabilistically decides between moving the system to state \(s'\) or staying in state \(s\). These probabilities ultimately lead the system to move to states of lower energy. Typically this step is repeated until the system reaches a state that is good enough for the application, or until a given computation budget has been exhausted.
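A minimal sketch of this acceptance rule is given below; the Gaussian neighborhood move and the geometric cooling schedule are illustrative choices, not the settings used in the experiments.

```python
import numpy as np

def simulated_annealing(phi, x0, bounds, t0=1.0, cooling=0.999,
                        n_eval=50000, seed=None):
    """Minimal SA sketch with Metropolis acceptance and geometric cooling."""
    rng = np.random.default_rng(seed)
    lb, ub = np.array(bounds, dtype=float).T
    x = np.clip(np.asarray(x0, dtype=float), lb, ub)
    fx = phi(x)
    best, f_best, t = x.copy(), fx, t0
    for _ in range(n_eval):
        # neighboring state: small random perturbation kept inside the bounds
        cand = np.clip(x + 0.05 * (ub - lb) * rng.standard_normal(x.size), lb, ub)
        f_cand = phi(cand)
        # always accept downhill moves; accept uphill moves with prob. exp(-dE / T)
        if f_cand < fx or rng.random() < np.exp(-(f_cand - fx) / max(t, 1e-12)):
            x, fx = cand, f_cand
            if fx < f_best:
                best, f_best = x.copy(), fx
        t *= cooling                                # cooling schedule
    return best, f_best
```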
3.3.5 Artificial Bee Colony algorithm
Artificial Bee Colony (ABC) (Karaboga 2005; Karaboga and Bastruk 2008) is an optimization algorithm based on the intelligent foraging behavior of honey bee swarm.
In the ABC model, the colony consists of three groups of bees: employed bees, onlookers and scouts. It is assumed that there is only one artificial employed bee for each food source. In other words, the number of employed bees in the colony is equal to the number of food sources around the hive. Employed bees go to their food source, come back to the hive and dance in this area. An employed bee whose food source has been abandoned becomes a scout and starts to search for a new food source. Onlookers watch the dances of the employed bees and choose food sources depending on these dances.
3.3.6 Backtracking Search algorithm
Backtracking Search Algorithm (BSA) (Knuth 1968; McGregor 1982) is a general algorithm for finding a solution to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the solutions and abandons each partial candidate \(c\) ("backtracks") as soon as it determines that \(c\) cannot possibly be completed to a valid solution.
BSA enumerates a set of partial candidates that, in principle, could be completed in various ways to give all the possible solutions to the given problem. The completion is done incrementally, by a sequence of candidate extension steps. Conceptually, the partial candidates are represented as the nodes of a tree structure, the potential search tree. Each partial candidate is the parent of the candidates that differ from it by a single extension step. The leaves of the tree are the partial candidates that cannot be extended further.
BSA traverses this search tree recursively, from the root down, in depth-first order. At each node \(c\), the algorithm checks whether c can be completed to a valid solution. If it cannot, the whole sub-tree rooted at c is skipped. Otherwise, the algorithm checks whether \(c\) itself is a valid solution, and if so reports it to the user and recursively enumerates all sub-trees of \(c\). The two tests and the children of each node are defined by user-given procedures. Therefore, the actual search tree that is traversed by the algorithm is only a part of the potential tree. The total cost of the algorithm is the number of nodes of the actual tree times the cost of obtaining and processing each node. This fact should be considered when choosing the potential search tree and implementing the pruning test.
3.3.7 Differential Search algorithm
Differential Search algorithm (DSA) (Civicioglu 2012) is an effective evolutionary algorithm for solving real-valued numerical optimization problems. DSA was inspired by migration of superorganisms utilizing the concept of stable-motion.
In DSA, the search space is simulated as the food areas and each point in the search space corresponds to an artificial-superorganism migration. The goal of this migration is to find the global optimal solution of the problem. During this process, the artificial-superorganism checks which randomly selected positions can be retained temporarily. If such a tested position is suitable to be retained for some time, the artificial-superorganism uses this migration model to settle at the discovered position and then continues its migration from this position on.
3.3.8 Particle Swarm Optimization algorithm
Particle Swarm Optimization (PSO) (Poli 2008; Clerc 2006) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. PSO optimizes a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but it is also guided toward the best known positions in the search space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
The basic variant of the PSO algorithm works with a population (swarm) of candidate solutions (particles). These particles are moved around in the search space according to a few simple formulae. The movements of the particles are guided by their own best known positions in the search space as well as the entire swarm's best known position. When improved positions are discovered, they come to guide the movements of the swarm. The process is repeated, and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered.
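A minimal sketch of these update rules is given below; the inertia weight w and the acceleration coefficients c1, c2 are the usual PSO parameters and their values here are illustrative.

```python
import numpy as np

def pso(phi, bounds, n_particles=30, n_iter=1000, w=0.7, c1=1.5, c2=1.5,
        seed=None):
    """Minimal particle swarm optimization sketch (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.array(bounds, dtype=float).T
    dim = lb.size
    x = rng.uniform(lb, ub, size=(n_particles, dim))        # positions
    v = np.zeros_like(x)                                    # velocities
    pbest, f_pbest = x.copy(), np.array([phi(p) for p in x])
    g = int(np.argmin(f_pbest))
    gbest, f_gbest = pbest[g].copy(), f_pbest[g]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)
        f = np.array([phi(p) for p in x])
        improved = f < f_pbest
        pbest[improved], f_pbest[improved] = x[improved], f[improved]
        g = int(np.argmin(f_pbest))
        if f_pbest[g] < f_gbest:
            gbest, f_gbest = pbest[g].copy(), f_pbest[g]
    return gbest, f_gbest
```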
4 Results
The localization and modeling methods introduced in the previous sections were tested on real astronomical data. Data acquired by MAIA (Meteor Automatic Imager and Analyzer System) (Koten et al. 2011) were used for the presentation of the system analysis. The detection and modeling methods were then applied to chosen parts of the astronomical image presented in Fig. 1. This image was acquired on 8 August 2004 by the BOOTES 2 astronomical imaging system with an SBIG ST-9 (LPT) astronomical camera. This camera was equipped with Meade optics (focal length 30 cm, lens speed 10). These data are not adjusted by dark frame subtraction (Buil 1991).
Table 1 contains the results of the dark frame analysis. The dark frames were acquired by the MAIA system for different values of CCD sensor temperature. From this table, it is possible to conclude that the imaging system output has a Poisson distribution with \(\lambda _1 =\cdots =\lambda _n\) for all tested CCD sensor temperatures. Thus the CCD sensor has the same properties in each pixel at the given sensor temperatures.
Figure 2 presents graphical results of the FDR evaluation for one selected CCD sensor temperature. From Fig. 2, it can also be seen that no \(p\) value lies under the critical \(p\) value line evaluated by the Sidak's correction. Thus the FDR is indeed equal to zero.
Figure 3 presents analyzed light and dark images and Fig. 4 shows results of object detection algorithm using FDR evaluation.
In Fig. 4a, the sorted \(p\) values that occur under the critical \(p\) value curve can be seen. These are statistically significant and indicate object occurrence. Figure 4 then provides a binary image with the detected object regions. The significance level used for the FDR evaluation was \(\alpha = 0.05\).
For the purpose of PSF modeling, the astronomical objects were classified into three classes based on the bit depth of the analyzed image. The processed data were acquired with a 16 bit depth, thus the maximum intensity is 65,535. The interval of intensity values was uniformly divided into three classes, which can be written as follows:
- small object—maximum intensity in the analyzed area is less than 21,845,
- medium object—maximum intensity in the analyzed area is higher than 21,845 and less than 43,690,
- large object—maximum intensity in the analyzed area exceeds 43,690 and the top is given by the system resolution properties, thus 65,535.
Chosen objects that were used for an application of proposed methods can be seen in Fig. 5.
The modeling results presented in this section are summarized in Tables 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 and 15. These tables contain the minimum and maximum values of \(\phi \), its average and standard deviation. Besides the objective function values, these tables also show the estimates of the model parameters, listed in Table 2, for the best value of \(\phi \). The resulting vector of estimated model parameters can be written as:
where \(par_i\) are given by Table 2. Table 3 contains the lower (\(LB\)) and upper bounds (\(UB\)) of the model parameters.
For each optimization method, 50 cycles of the objective function \(\phi \) minimization were performed. All optimization methods were limited to 50,000 evaluations of the objective function in each cycle. The lower and upper bounds of the optimized models were estimated empirically.
Tables 4, 5, 6 and 7 summarize the results of the small object model optimization, namely the Gauss model in Table 4, Moffat in Table 5 and the combined models of interference with either turbulence or focusing in Tables 6 and 7, respectively.
From Tables 4, 5, 6 and 7, it is obvious that the best value of the objective function \(\phi \) was reached by the combination of interference and focusing, and this result was achieved by the harmony search optimization method. When the particular optimization methods are compared for the Gauss and Moffat models, the differences are not that large. All the methods gave almost the same minimum values of \(\phi \), but looking at the standard deviation of \(\phi \), the most stable method is the HS algorithm. The other heuristics have comparable \(\phi \) standard deviations, while fmincon was the worst. In the case of the Gauss and Moffat models, the fmincon function gives results comparable to the heuristics for the given problem. Table 7, however, shows that the fmincon function is not suitable for more complicated models such as the used combination of simple functions. The best value of the objective function was again reached by HS, followed by SA, but CS also gave a good result with a lower standard deviation than the two previously mentioned algorithms.
Tables 8, 9, 10 and 11 show the results of the medium object localization. It is again obvious that the best results with respect to \(\phi _{\mathrm {min}}\) were reached by the HS algorithm, followed by the SA, CS and MCS methods. The standard deviations of \(\phi \) were again lowest for the HS algorithm for the Gauss and Moffat models. For the third model, the lowest value of \(\phi _{\mathrm {std}}\) with respect to the value of \(\phi _{\mathrm {min}}\) was achieved with the CS algorithm, but HS also gives satisfactory results.
The last four tables, Tables 12, 13, 14 and 15, summarize the localization results for the large astronomical object. For the object used in this article, the best results with respect to both \(\phi _{\mathrm {min}}\) and \(\phi _{\mathrm {std}}\) were reached by the HS method for all three models.
When the minimum values of \(\phi \) are compared, the fmincon function gives results comparable to the optimization heuristics for all three types of objects when the Gauss and Moffat models are applied. On the other hand, as previously mentioned, it does not give satisfactory results for the third model.
With regard to the standard deviations, the fmincon function is not suitable from the point of view of solution repeatability. When the results from Tables 4 through 15 are compared, the best method is the HS algorithm: it always gives the lowest estimate of \(\phi \). From the stability point of view, however, CS gives the best results when the \(\phi _{\mathrm {std}}\) values are compared.
5 Conclusion
In this paper, algorithms used for the analysis of astronomical images, especially for the detection of objects and their modeling, were described. It was also explained how the authors derived the objective function from their hypothesis of Poisson distributed data.
A method of object detection based on the presumption that the analyzed data are Poisson distributed was presented. This presumption was verified on dark images acquired by the MAIA astronomical system, Table 1, and led to the derivation of the object detection algorithm. The presented algorithm was derived using the relationship between the Poisson and NB distributions, whose cumulative distribution function allows a comparison of the information present in the light and dark images. In combination with the FDR evaluation using the Sidak's correction, a new object detection algorithm was presented, Fig. 4.
The hypothesis of Poisson distributed data and the considered image model allowed us to derive an objective function, which can be used with the different object models that were optimized. Two commonly used object models, i.e., Gauss and Moffat, were described, and a more complicated model was also introduced, which assumes a combination of interference with either turbulence or focusing phenomena. For the purpose of objective function optimization, different approaches were used. The first one was the application of the Matlab fmincon function, which is based on classical optimization algorithms. The second approach was the application of optimization heuristics such as cuckoo search, harmony search, etc.
In this article, three different cuts of the used astronomical image were analyzed, where each cut contains one astronomical object represented by a PSF with a different maximum intensity. These PSFs were modeled using the previously mentioned approaches and algorithms. From the results, Tables 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 and 15, it is obvious that the combined model of interference and focusing fits the astronomical objects better than the two simpler models and the second combined model, i.e., interference plus turbulence. When the optimization methods are compared, the fmincon function is suitable for the first two simple models, but it is not acceptable for the third model. It is also not acceptable from the point of view of the \(\phi _{\mathrm {std}}\) values, where the fmincon function has higher values than the optimization heuristics for all used object models. When the \(\phi _{\mathrm {min}}\) and \(\phi _{\mathrm {std}}\) values are compared, the HS algorithm can be marked as the best heuristic, but from the stability point of view the CS algorithm gives the best results.
The results and approaches presented in this article are expected to find their use in our future work, which will be focused on further research of astronomical images, where we would like to analyze whether these images can follow a different distribution law than the Poisson one. The results will also find use in further research focused on pair, triplet and quadruplet interactions of astronomical objects and their deconvolution.
References
Andrews LC, Shivamoggi BK (1999) Integral transforms for engineers. SPIE Press, Washington
Birney DS, Gonzalez G, Oesper D (2006) Observational astronomy. Cambridge University Press, Cambridge
Brown LD, Zhao LH (2001) A new test for the Poisson distribution. http://www-stat.wharton.upenn.edu/lzhao/papers/newtest. Accessed 04 Sep 2013
Budding E, Demircan O (2007) Introduction to astronomical photometry. Cambridge University Press, New York
Buil C (1991) CCD astronomy: construction and use of an astronomical CCD camera. Willmann-Bell Inc, Richmond
Civicioglu P (2012) Transforming geocentric cartesian coordinates to geodetic coordinates by using differential search algorithm. Comput Geosci 46:229–247
Clerc M (2006) Particle swarm optimization. ISTE Ltd, London
Efron B (2010) Large-scale inference: empirical Bayes methods for estimation, testing, and prediction. Cambridge University Press, Cambridge
Fischer RE, Tadic-Galeb B, Yoder PR (2008) Optical system design. McGraw-Hill, New York
Gandomi AH, Yang XS, Alavi AH (2013) Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng Comput 29(1):17–35
Geem ZW (2009) Music-inspired harmony search algorithm: theory and applications. Springer, Berlin
Geem ZW, Kim JH, Loganathan GV (2011) A new Heuristic optimization algorithm: harmony search. SIMULATION 76(2):60–68
Howell SB (2006) Handbook of CCD astronomy. Cambridge University Press, New York
Johnson NL, Kemp AW, Kotz S (2005) Univariate discrete distributions. Wiley, New York
Karaboga D (2005) An idea based on honey Bee Swarm for numerical optimization. http://mf.erciyes.edu.tr/abc/pub/tr06_2005. Accessed 10 Sep 2014
Karaboga D, Basturk B (2008) On the performance of artificial bee colony (ABC) algorithm. Appl Soft Comput 8(1):687–697
Kirkpatrick S, Gelatt CD Jr, Vecchi MP (1983) Optimization by simulated annealing. Science 220(4598):671–680
Knuth DE (1968) The art of computer programming. Addison-Wesley, Boston
Koten P, Fliegel K, Vítek S, Páta P (2011) Automatic video system for continues monitoring of the meteor activity. Earth Moon Planets 108:69–76
Laarhoven PJM, Aarts EHL (1987) Simulated annealing: theory and applications. Springer, Berlin
McGregor JJ (1982) Backtrack search algorithms and the maximal common subgraph problem. Softw Pract Exp 12(1):23–34
McMinn JD (2006) Extension of a Kolmogorov atmospheric turbulence model for time-based simulation implementation, NASA-AIAA-97-3532
Moffat AFJ (1969) A theoretical investigation of focal stellar images in the photographic emulsion and application to photographic photometry. Astron Astrophys 3:455–461
Mojzis F, Kukal J, Svihlik J (2012) Astronomical systems analysis and object detection. In: Proceedings of 22nd international conference Radioelektronika, Brno (Czech Republic), pp 201–204
Mojzis F, Kukal J, Svihlik J (2012) Diffusion model with Poisson noise and its identification. In: Proceedings of 18th international conference on soft computing mendel 2012, Brno (Czech Republic), pp 434–439
Papoulis A, Pillai US (2002) Probability, random variables and stochastic processes. McGraw-Hill, New York
Pawitan Y (2001) In all likelihood: statistical modelling and inference using likelihood. Oxford University Press, New York
Poli R (2008) Analysis of the publications on the applications of particle swarm optimisation. http://downloads.hindawi.com/archive/2008/685175. Accessed 10 Sep 2014
Price WL (1977) A controlled random search procedure for global optimization. Comput J 20(4):367–370
Rastrigin LA (1963) The convergence of the random search method in the extremal control of a many parameter system. Autom Remote Control 24(10):1337–1342
Severini TA (2001) Likelihood methods in statistics. Oxford University Press, New York
Sharma KK (2006) Optics: principles and applications. Academic Press Inc, London
Starck JL, Murtagh F (2006) Astronomical image and data analysis. Springer, Berlin
Sterken C, Manfroid J (1992) Astronomical photometry a guide. Springer, Dordrecht
Surhone LM, Timpledon MT, Marseken SF (2010) Variance-stabilizing transformation. VDM Publishing, Germany
Yang XS, Deb S (2009) Cuckoo search via Levy flights. In: Proceedings of world congress on nature & biologically inspired computing (NaBIC 2009). IEEE Publications, USA, pp 210–214
Acknowledgments
This work has been supported by the financial support from specific university research MSMT No 20/2013, research project MSM 6046137306 of the Ministry of Education, Youth and Sports of the Czech Republic and by the grant No. GA14-25251S Non-linear imaging systems with spatially variant point spread function of the Czech Science Foundation.
Additional information
Communicated by V. Loia.