# Simulation of Image Fusion of Dual Modality (Electrical Capacitance and Optical Tomography) in Solid/Gas Flow


## Abstract

This paper presents a novel method of combining dual-modality electrical capacitance and optical tomography for monitoring and investigating solid/gas flow. The objective of this method is to obtain a good-quality image of the full-scale concentration distribution of solid/gas flow. A new image reconstruction algorithm that fuses the dual-modality images is developed and evaluated.

### Keywords

Electrical capacitance tomography · Optical tomography · Dual mode tomography · Image fusion · Reconstruction algorithm · Solid/gas flow

## 1 Introduction

In recent years, numerous types of tomographic sensors have been designed and evaluated for monitoring and investigating industrial solid/gas flow. This research was conducted in order to obtain high resolution cross-section images of solid/gas flow for industrial applications. However, though many studies have been conducted in this field, it remains a very difficult problem [1, 3].

The advantages and limitations of tomography modalities in solid/gas flow measurement:

Sensing principle | Advantages | Limitations |
---|---|---|
Electrical capacitance | Low cost, no radiation, rapid response and robustness [1, 3]; full concentration distribution [2]; good temporal resolution [3] | Blurred image at concentration distributions below 30% [2, 5]; complicated image reconstruction [1] |
Optical | Spatial resolution of 2% [2, 5]; good measurement of medium distribution and content over the whole investigated cross-section simultaneously [6]; safe and free of electromagnetic interference [6]; low cost | Limited measurement in high-distribution flows above 35% [2] |
Gamma ray | | Requires a relatively long time for energy integration and mechanical movement to cover the whole region [4, 6]; radiation dose to the surroundings [3, 4]; heavy shielding [4] |
Dual mode: electrical capacitance and gamma | | Radiation dose to the surroundings [3, 4]; heavy shielding [4]; high cost |

Thus, considering the limitations of using only one sensing technique, we propose a new method of combining an electrical capacitance and optical sensor in one sensor plane. The main purpose of this project is to obtain a high resolution image fusion over the full range of concentration distributions by means of a new image reconstruction algorithm.

This paper presents the method for producing image fusion of dual mode tomography (DMT) by using a new image reconstruction algorithm in solid/gas flow.

## 2 Theoretical Consideration

### 2.1 Overview of System Design

The proposed sensor for this study is based on hard-field and soft-field techniques. A hard-field system, such as optical tomography, is equally sensitive to the parameter it measures in all positions throughout the measurement volume [4]. Soft-field sensors, such as the capacitance tomograph, however, measure parameters that change depending on the position in the measurement volume. In a dual mode system that uses both soft and hard field sensors, the different sensors often provide complementary process information that can improve the measurement results [4].

### 2.2 Image Reconstruction for Optical Tomography

The image reconstruction for optical tomography is based on the following assumptions:

1. The relationship between the number of solid particles passing through a beam and the corresponding sensor output voltage is linear [2, 5].

2. The emission of light intensity from each LED is uniform along all covered directions. In a single projection, it is assumed that each photodiode is exposed to a beam from the emitter, i.e. a single projection results in 16 light beams from the emitter towards the photodiodes. Each light beam possesses a different width, depending on the sensor geometry and projection angle. The light beam widths are shown in Fig. 2.

3. The attenuation factor for gases is assumed to be zero, while the attenuation factor for solid particles is assumed to be one: all light beams incident on the surface of a solid particle are fully absorbed by it.

4. To generate the sensitivity matrices for an image resolution of 32 × 32 square pixels, a custom software algorithm written in Visual C++ is used. Each element in the sensitivity map for the optical sensor output is represented by the value '1' if light passes through and '0' if no light passes through.

The first emitter node *P*_{0} (Tx0) is placed at coordinate (0, 256) and each subsequent node is placed 2.8125° apart from the previous one in the clockwise direction. The coordinating technique is based on the following formula:

\( P_{n} \cdot x = r - r\cos \left( {n\theta } \right),\quad P_{n} \cdot y = r - r\sin \left( {n\theta } \right) \)

where *P*_{n}·*x* is the *x*-position of the *n*th node, *P*_{n}·*y* is the *y*-position of the *n*th node, *r* is the radius of the circle, which is equal to 256, and *θ* is the angle between adjacent nodes measured from the centre of the circle [point (256, 256)], which is equal to 2.8125°.

The 512 × 512 drawing area is divided into *n* × *n* small rectangles, where *n* is the desired dimension of the sensitivity map (the resolution of the image). The total number of pixels in a small rectangle, *A*_{max}, can be determined using the following equation:

\( A_{max} = \left( {\frac{512}{n} - 1} \right)^{2} \)

where *n* is the decided resolution. To obtain the sensitivity map of a specific view in a projection, the shape of the light beam (a hexagon with six nodes) is drawn into the picture box based on the orientation of the corresponding emitter and receiver nodes. Through the computer graphic memory retrieval function provided by the Win32 API function call library, the value of each pixel in the picture can be inspected after the drawing. By counting the number of changed pixel values within rectangle-*xy* (where *x* and *y* specify the position of the rectangle in the picture box), the number of light-occupied pixels in rectangle-*xy*, *A*_{xy}, is obtained. *A*_{xy} provides a relative measure of how many pixels in a small rectangle are occupied by a specific light beam. However, it does not give the net contribution percentage of a specific light beam in a specific rectangle, because the distribution of the light beams over the rectangles is non-uniform. Therefore, the normalized sensitivity map has to be determined from the contribution of each view in a specific rectangle. The following relations were used in the program to obtain the sensitivity maps:

\( M_{Tx,Rx} \left( {x,y} \right) = \sum\limits_{n = 0}^{k - 1} {\sum\limits_{m = 0}^{k - 1} {B_{x,y} \left( {n,m} \right)} } ,\quad N\left( {x,y} \right) = \sum\limits_{Tx = 0}^{15} {\sum\limits_{Rx = 0}^{15} {M_{Tx,Rx} \left( {x,y} \right)} } ,\quad \overline{M}_{Tx,Rx} \left( {x,y} \right) = \frac{{M_{Tx,Rx} \left( {x,y} \right)}}{{N\left( {x,y} \right)}} \)

where *M*_{Tx,Rx}(*x*, *y*) is the sensitivity map for the view from *Tx* to *Rx* before normalization; *B*_{x,y}(*n*, *m*) is a Boolean array of dimension *k* × *k* representing the pixels in rectangle-*xy*, obtained by retrieving the computer graphic memory after the drawing is completed (a pixel (*n*, *m*) occupied by light is represented by 1, otherwise 0); *N*(*x*, *y*) is the sum of the same element-*xy* over the 256 sensitivity maps before normalization; \( \overline{M}_{Tx,Rx} \left( {x,y} \right) \) is the normalized sensitivity matrix for the view from *Tx* to *Rx*; *n* and *m* are the row and column indices in the rectangle; and \( k = \frac{512}{n} - 1 \), where *n* is the desired resolution. Both *x* and *y* range from 0 to *n* − 1, and both *Tx* and *Rx* range from 0 to 15.

By using a counting loop in a computer program, the number of 'black' pixels in the array can be determined. To obtain the normalized sensitivity map, each element in the sensitivity map is divided by the total number of pixels occupied by the 256 views in the same rectangle (element), *N*_{x,y}. For a sensitivity map with resolution 16 × 16, the value of *N*_{x,y} is shown in the denominator of the corresponding rectangle at the right of Fig. 3. For example, *N*_{6,4} has a value of 9,078, so the normalized value for element (6,4) of the Tx15-Rx5 view is 767/9,078 ≈ 0.0845, which means that the view Tx15-Rx5 contributes about 8.45% of the light-occupied area in rectangle (6,4).
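The per-rectangle normalization described above can be sketched as follows. This is a minimal numpy sketch, not the paper's Visual C++ implementation; the function name and array layout are assumptions.

```python
import numpy as np

def normalize_sensitivity_maps(raw_maps):
    """Divide each view's pixel count A_xy by N(x, y), the total count of
    light-occupied pixels accumulated over all views in that rectangle.

    raw_maps: float array of shape (views, n, n); raw_maps[v, y, x] holds
    the light-occupied pixel count A_xy for view v (256 views in the paper).
    """
    raw_maps = np.asarray(raw_maps, dtype=float)
    totals = raw_maps.sum(axis=0)   # N(x, y) per rectangle
    totals[totals == 0] = 1.0       # avoid dividing by zero in empty cells
    return raw_maps / totals        # each element is that view's fraction
```

With the paper's figures, a view occupying 767 of the 9,078 total pixels counted in a rectangle receives a weight of 767/9,078 ≈ 0.0845.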

The reconstructed voltage distribution for optical tomography, *V*_{LBP}(*x*, *y*), is obtained using the LBP algorithm:

\( V_{LBP} \left( {x,y} \right) = \sum\limits_{Tx = 0}^{15} {\sum\limits_{Rx = 0}^{15} {S_{Rx,Tx} \times \overline{M}_{{T_{x} ,R_{x} }} \left( {x,y} \right)} } \)

where *S*_{Rx,Tx} is the signal-loss amplitude of the *Rx*th receiver for the *Tx*th projection, in volts, and \( \overline{M}_{{T_{x} ,R_{x} }} \left( {x,y} \right) \) is the normalized sensitivity matrix for the view of the *Tx*th projection [5].

### 2.3 Image Reconstruction for Electrical Capacitance Tomography

The electrical capacitance sensors measure the change in capacitance in order to reconstruct the cross-sectional distribution of permittivity inside the pipe [1, 3, 4, 7]. The output of the electrical capacitance sensor is linearly proportional to the distribution of solid particles inside the cross-section of the pipe, and covers the full range of distributions (0–100%). Calculating the sensitivity matrix on a grid of 32 × 32 square pixels for electrical capacitance tomography (ECT) provides the same resolution as the optical tomography.

The most popular image reconstruction technique for electrical capacitance sensors is the linear back projection (LBP). In this method an image is obtained by superimposing a set of predetermined sensitivity maps, using the measured data as weighting factors.
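The LBP superposition described above can be sketched in a few lines. This is a generic numpy sketch of the weighted-superposition step, with illustrative names; it applies equally to the ECT and OT sensitivity maps.

```python
import numpy as np

def lbp_reconstruct(measurements, sensitivity_maps):
    """Linear back projection: superimpose predetermined sensitivity maps,
    using the measured data as weighting factors.

    measurements: shape (views,) - e.g. normalized capacitance changes
    sensitivity_maps: shape (views, n, n) - normalized per-view maps
    """
    measurements = np.asarray(measurements, dtype=float)
    maps = np.asarray(sensitivity_maps, dtype=float)
    # weighted sum over all views gives the back-projected image
    return np.tensordot(measurements, maps, axes=1)
```

Each pixel of the result is the sum, over all views, of that view's measurement times its sensitivity at the pixel.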

### 2.4 Image Reconstruction for Image Fusion

A wavelet transform is first performed on each source image, then a fusion decision map is generated based on a set of fusion rules. The fused wavelet coefficient map is constructed from the wavelet coefficients of the source images according to the fusion decision map. Finally, the fused image is obtained by performing the inverse wavelet transform.

When constructing each wavelet coefficient for the fused image, we need to determine which source image better describes that coefficient. This information is kept in the fusion decision map, which has the same size as the original image. Each value is the index of the source image that is likely to be more informative for the corresponding wavelet coefficient; a decision is thus made for each coefficient. One way to make the decision for a coefficient of the fused image is to consider only the corresponding coefficients in the source images, as illustrated by the red pixels; this is called a pixel-based fusion rule. Another way is to consider not only the corresponding coefficients but also their close neighbours, say a 3 × 3 or 5 × 5 window, as illustrated by the blue and shaded pixels; this is called a window-based fusion rule. The window-based approach exploits the fact that there is usually a high correlation among neighbouring pixels.
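The pixel-based fusion rule can be sketched with a single-level Haar transform. The paper does not specify the wavelet or the coefficient-selection rule; this sketch assumes a Haar basis, averages the approximation subband, and keeps the larger-magnitude detail coefficient from either source.

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 4, (a - b + c - d) / 4,
            (a + b - c - d) / 4, (a - b - c + d) / 4)

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = LL + LH + HL + HH
    img[0::2, 1::2] = LL - LH + HL - HH
    img[1::2, 0::2] = LL + LH - HL - HH
    img[1::2, 1::2] = LL - LH - HL + HH
    return img

def fuse_pixelwise(img1, img2):
    """Pixel-based fusion rule: average the approximation (LL) subbands and,
    for each detail coefficient, keep the source with the larger magnitude
    (this choice is the fusion decision map)."""
    s1, s2 = haar2d(np.asarray(img1, float)), haar2d(np.asarray(img2, float))
    fused = [(s1[0] + s2[0]) / 2]
    for c1, c2 in zip(s1[1:], s2[1:]):
        fused.append(np.where(np.abs(c1) >= np.abs(c2), c1, c2))
    return ihaar2d(*fused)
```

A window-based rule would replace the elementwise `np.where` with a comparison of local coefficient energy over a 3 × 3 or 5 × 5 neighbourhood.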

1. The sensitivity maps of the ECT and OT sensors have the same grid resolution, i.e. 32 × 32.

2. *C*_{i}(∆*T*) defines the temporal material concentration for the *i*th pixel of a pipe cross-section over a time interval ∆*T*. Let *p*_{i}^{O}(∆*t*) define the corresponding pixel value of the reconstructed image obtained from data captured by the optical sensor, and *p*_{i}^{C}(∆*t*) the pixel value derived from the electrical capacitance sensor. Assuming both sensors have the same temporal resolution, *C*_{i}(∆*T*) = *p*_{i}^{O}(∆*t*) = *p*_{i}^{C}(∆*t*).

3. Image fusion provides the full range of distributions, as shown in Fig. 6. The threshold distribution level (*L*) for capacitance tomography is *L* = 35%; the value of *L* is based on the maximum distribution of the optical tomography data.

4. The image fusion result is derived by adjusting the pixel values *p*_{i}^{C} in the detected regions. Where the concentration is low, *p*_{i}^{C} ≤ *L*, *p*_{i}^{C} is not taken into account for the fused pixel *P*_{i}^{F}, and *P*_{i}^{F} = *p*_{i}^{O} is applied instead; for *p*_{i}^{O} ≥ *L*, the *i*th pixel follows *P*_{i}^{F} = *p*_{i}^{C}. Let *P*_{i,j}^{F} represent the pixel values in the final frame. Each individual pixel *P*_{i,j}^{F} of the *n* final frames must meet the condition

\( P_{i,j}^{F} = \left\{ {\begin{array}{ll} {p_{i}^{O} ,} & {p_{i}^{C} \le L} \\ {p_{i}^{C} ,} & {p_{i}^{O} \ge L} \\ \end{array} } \right. \)

where *n* is the number of frames, *i* is the pixel number, *j* is the frame number, *P*_{i}^{F} is the corresponding pixel in the fused image, *p*_{i}^{O} is the pixel value for optical tomography, and *p*_{i}^{C} is the pixel value for capacitance tomography.

The reconstructed image from the electrical capacitance sensor represents the capacitance distribution in a pipe cross-section as a normalized concentration distribution *c*^{C}, 0 ≤ *c*^{C} ≤ 1, over the full range of concentrations. The reconstructed image from the optical sensor represents a normalized intensity distribution *c*^{O}, 0 ≤ *c*^{O} ≤ 1, below the threshold distribution (*L*).

### 2.5 Image Analysis (Reconstructed Image Error Measurements)

The peak signal-to-noise ratio (PSNR) is used to quantify the reconstructed image error:

\( PSNR = 10\log_{10} \frac{{n^{2} \cdot f\left( {x,y} \right)_{max}^{2} }}{{\sum\nolimits_{x,y} {\left[ {f\left( {x,y} \right) - f_{A} \left( {x,y} \right)} \right]^{2} } }} \)

where *f*(*x*, *y*)_{max} is the maximum value of the entire image, *f*(*x*, *y*) is the reference image, *f*_{A}(*x*, *y*) is the reconstructed image, and *n*^{2} is the image size.

In general, the higher the PSNR, the higher the quality of the image; if two images are identical, the PSNR is infinite, because the ratio of signal to noise is unbounded. Here the 'signal' is the original image and the 'noise' is the error in reconstruction (Ting [10]). The absolute value of the PSNR is not meaningful in itself, but the comparison between values for different reconstructed images provides a measure of quality. All calculations for the error analyses were implemented using Visual Basic programming.
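The PSNR computation is straightforward to express in code. This numpy sketch (not the paper's Visual Basic implementation) uses the image maximum as the peak value, as in the definition above.

```python
import numpy as np

def psnr(reference, reconstructed):
    """Peak signal-to-noise ratio in dB; identical images give +inf."""
    reference = np.asarray(reference, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    # mean squared error between reference and reconstruction
    mse = np.mean((reference - reconstructed) ** 2)
    if mse == 0:
        return float("inf")
    peak = reference.max()
    return 10.0 * np.log10(peak ** 2 / mse)
```

Comparing PSNR values across reconstructions of the same phantom then ranks their quality, as done in the tables below.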

## 3 Results and Discussion

The experiments for the static distribution flow models are time-independent. The objects used in this study simulate different flow models, such as a single-object flow model and complex-object flow models. For this simulation, a customized graphical user interface (GUI) was developed to represent an artificial two-dimensional cross-sectional flow regime, in which the image object is drawn onto a 32 × 32-resolution bitmap (the same resolution as the real-time cross-sectional image). To verify the performance of the DMT system, image quality assessment (error analysis) was carried out using different image reconstruction algorithms on the different flow models.

### 3.1 Single Object Flow Model

In order to test the performance of DMT system in high spatial resolution, a single object flow model was tested. The expected result for DMT was that it would be able to present and measure a small diameter object.

where *n*_{pixels} is the number of pixels and *D*_{mm} is the diameter of the measured object in mm.

Phantom properties:

Single object | Shape | Diameter (mm) | Object diameter (pixels) | Object area (pixels) |
---|---|---|---|---|
Wire-wrap | Round | 0.45 | 2.88 | 26 |

Tomogram for single object.

PSNR result analysis for different sensors:

Type of sensor | PSNR (dB) |
---|---|
ECT | 43.89 |
DMT | 49.22 |
OT | 45.72 |

PSNR result analysis for DMT:

No. of iterations | PSNR (dB) |
---|---|
0 (LBP) | 49.22 |
1 | 49.73 |
2 | 49.89 |
3 | 49.92 |
4 | 49.98 |
5 | 50.09 |
6 | 50.12 |
7 | 50.19 |
8 | 50.25 |
9 | 50.37 |
10 | 50.45 |
11 | 50.39 |
12 | 50.31 |
13 | 50.24 |
14 | 50.11 |
15 | 49.99 |

Single objects with a diameter of 0.45 mm are traced, whereby the zero-iteration (LBP) image shows the lowest PSNR. As the iterative reconstruction algorithm is applied, the PSNR increases from the first iteration up to a maximum at the 10th iteration. After the 10th iteration, the PSNR values decline steadily. This is because the image computed at the 10th iteration already matched the input projection measurements adequately in the forward transform. This result agrees with the statement made in [1] that after a certain number of iterations the fidelity of the images starts to deteriorate: beyond the 10th iteration there is a growing difference between the projection measurements of the images produced and the actual measurements.

### 3.2 Complex Object Flow Model

Tomogram for complex object.

(a) PSNR error for different sensors and (b) PSNR analysis for different numbers of iterations:

(a) Type of sensor | PSNR (dB) |
---|---|
ECT | 38.12 |
DMT | 40.52 |
OT | 27.19 |

(b) No. of iterations | PSNR (dB) |
---|---|
0 | 40.52 |
1 | 41.32 |
2 | 41.42 |
3 | 41.50 |
4 | 41.57 |
5 | 41.61 |
6 | 41.67 |
7 | 41.74 |
8 | 41.78 |
9 | 41.82 |
10 | 41.86 |
11 | 41.73 |
12 | 41.44 |
13 | 41.35 |
14 | 41.22 |
15 | 40.83 |

Simulation image analyses for various flow models show that the DMT system is capable of producing better-quality images than a single modality when using the LBP reconstruction algorithm. However, this algorithm alone is still not adequate for producing a good-quality image.

The calculations of PSNR for different sensing methods and iterations for DMT complex objects flow model are listed in Table 7 below.

The results show that the DMT provides better image quality than the OT and ECT: its PSNR is 5.92% higher than that of the ECT and 32.89% higher than that of the OT. The optical sensor has a lower PSNR due to its inability to penetrate the middle part of a complex object, while the ECT produces a blurred image in the middle part of the object [11]. Based on the PSNR analysis, the DMT system is reliable for use with turbulent flow models in real time.

From the error assessment theory that has been discussed earlier, it is known that a high PSNR represents a better image quality. The analysis of the graphs shows that basically, the LBP algorithm does not contribute to a good image quality because the images reconstructed using LBP are often distorted [11] and blurred. The iterative reconstruction algorithm offers a solution for minimizing the blurring artifacts of the reconstructed image produced by the LBP algorithm. This advantage can be clearly observed for the single object, multiple objects flow model and complex object flow model. However, there is a maximum limit of the number of iterations that will improve the images (by observing the maximum point of the PSNR). This point is also referred to as the optimum level for performing image reconstruction. After this point, the PSNR continues to decrease, while increasing processing time.
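The excerpt does not name the iterative reconstruction algorithm. A common choice for refining an LBP-style image is the Landweber iteration, sketched here under that assumption; the matrix `S`, the step size `alpha`, and the iteration count are illustrative.

```python
import numpy as np

def landweber(S, g, alpha=0.1, iterations=10):
    """Iterative refinement of a back-projected image (Landweber sketch).

    S: sensitivity matrix of shape (views, pixels)
    g: measurement vector of shape (views,)
    Starts from the back-projected image and repeatedly corrects it with the
    back-projected residual; clipping keeps values in the physical 0..1 range.
    """
    f = S.T @ g                      # initial (LBP-like) image
    for _ in range(iterations):
        residual = g - S @ f         # forward-project and compare
        f = np.clip(f + alpha * S.T @ residual, 0.0, 1.0)
    return f
```

As the text observes, running too many iterations can overfit the measurements, so the iteration count is typically chosen at the PSNR maximum.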

## 4 Conclusions

The development of a multimodal sensing and image fusion algorithm has been discussed. The simulation results from the dual-sensor tomography show that the sensor system can produce better image quality and resolution throughout the full range of concentration distributions than the single-modality methods. Overall, the resolution of the sensors is considered good, given that they can provide a reliable measurement of solid objects as small as 0.45 mm in diameter. In addition, the DMT is able to obtain an image of multiple objects without ambiguity. The iterative reconstruction algorithm has the potential to minimize the smearing around the image that is clearly present in images generated using the LBP algorithm.

## Notes

### Acknowledgments

The author would like to thank the International Atomic Energy Agency (IAEA) for its sponsorship and support in completing the training course at Technical University of Lodz, Poland (MAL/03036). A special thanks and appreciation to Dr. Volodymyr Mosorov and Prof. Dr. Dominik Sankowski of Technical University of Lodz.

### References

1. Yang, W. Q., & Liu, S. (2000). Role of tomography in gas/solid flow measurement. *Flow Measurement and Instrumentation, 11*, 237–244.
2. Green, R. G., Abdul Rahim, R., Evans, K., Dickin, F. J., Naylor, B. D., & Pridmore, T. P. (1998). Concentration profiles in a gravity chute conveyor by optical tomography measurement. *Powder Technology, 95*, 49–54.
3. Dyakowski, T., Johansen, G. A., Sankowski, D., Mosorov, V., & Wlodarczky, J. (2005). A dual modality tomography system for imaging gas/solids flow. In *Proceedings of the 4th world congress on industrial process tomography*.
4. Johansen, G. A., Froystein, T., Hjertaker, B. T., & Olsen, O. (1996). A dual sensor flow imaging tomographic system. *Measurement Science and Technology, 7*, 297–307.
5. Abdul Rahim, R., Chan, K. S., Pang, J. F., & Leong, L. C. (2005). A hardware development for optical tomography system using switch mode fan beam projection. *Sensors and Actuators A*.
6. Yan, C., Liao, Y., Lai, S., Gong, J., & Zhoa, Y. (2002). A novel optical fibre process tomography structure for industrial process control. *Measurement Science and Technology, 13*, 1898–1902.
7. Soleimani, M., Lionheart, W. R. B., Byars, M., & Pendleton, J. (2005). Nonlinear image reconstruction of electrical capacitance tomography (ECT) based on a validated forward problem. In *Proceedings of the 4th world congress on industrial process tomography*.
8. Meany, J. J. (2002). *Measures of signal performance. ESE 578—digital representation of signals*. St. Louis: Lecture notes.
9. CineForm. (2004). *Quality analysis comparing CineForm's Visually Perfect™ HD codec versus native camera formats*. USA: CineForm.
10. Ting, C. (1999). *A study of spatial colour interpolation algorithms for single-detector digital cameras*. Psych221/EE362 course project, Stanford University.
11. Yan, H., Liu, L. J., Xu, H., & Shao, F. Q. (2001). Image reconstruction in electrical capacitance tomography using multiple linear regression and regularization. *Measurement Science and Technology, 12*, 575–581.