Simulation of Image Fusion of Dual Modality (Electrical Capacitance and Optical Tomography) in Solid/Gas Flow

  • Rasif Mohd Zain
  • Ruzairi Abdul Rahim
  • Mohd Hafiz Fazalul Rahiman
  • Jaafar Abdullah
Original Paper

Abstract

This paper presents a novel method of combining dual-modality electrical capacitance and optical tomography for monitoring and investigating solid/gas flow. The objective of this method is to obtain a good quality image of the full-scale concentration distribution of solid/gas flow. A new image reconstruction algorithm that fuses the dual-modality images is developed and evaluated.

Keywords

Electrical capacitance tomography · Optical tomography · Dual mode tomography · Image fusion · Reconstruction algorithm · Solid/gas flow

1 Introduction

In recent years, numerous types of tomographic sensors have been designed and evaluated for monitoring and investigating industrial solid/gas flow. This research was conducted in order to obtain high resolution cross-section images of solid/gas flow for industrial applications. However, though many studies have been conducted in this field, it remains a very difficult problem [1, 3].

Several tomographic sensing approaches are among the many techniques that have been tried for producing images of solid/gas flow. These include electrical capacitance, gamma, and optical tomography (OT), each a single-modality technique. In previous studies, the use of only one tomography modality has proven incapable of producing high-resolution images throughout the full range of concentration distributions, and has therefore not been adequate for exploring the important flow characteristics [3]. The current trend in sensor development for solid/gas flow is to combine two or three modalities in one sensor plane. These multimodal techniques overcome the constraints found when only one tomography modality is applied. The advantages and limitations of the various sensing modalities have been described in numerous previous studies and are summarized in Table 1.
Table 1

The advantages and limitations of tomography modalities in solid/gas flow measurement

Electrical capacitance
  Advantages: Low cost, no radiation, rapid response and robustness [1, 3]; images the full concentration distribution [2]; good temporal resolution [3]
  Limitations: Blurred images at low concentration distributions below 30% [2, 5]; complicated image reconstruction [1]

Optical
  Advantages: Spatial resolution of 2% [2, 5]; good measurement of medium distributions, with the content of the whole investigated cross-section captured simultaneously [6]; safe and free of electromagnetic interference [6]; low cost
  Limitations: Limited measurement in high-distribution flows above 35% [2]

Gamma ray
  Advantages: Spatial resolution of 1% [3, 4]
  Limitations: Requires a relatively long time for energy integration and mechanical movement to cover the whole region [4, 6]; radiation dose to the surroundings [3, 4]; heavy shielding [4]

Dual mode (electrical capacitance and gamma)
  Advantages: High spatial and temporal resolution images [3, 4]
  Limitations: Radiation dose to the surroundings [3, 4]; heavy shielding [4]; high cost

Thus, considering the limitations of using only one sensing technique, we propose a new method of combining an electrical capacitance and optical sensor in one sensor plane. The main purpose of this project is to obtain a high resolution image fusion over the full range of concentration distributions by means of a new image reconstruction algorithm.

This paper presents the method for producing image fusion of dual mode tomography (DMT) by using a new image reconstruction algorithm in solid/gas flow.

2 Theoretical Consideration

2.1 Overview of System Design

The proposed sensor for this study is based on hard-field and soft-field techniques. A hard-field system, such as optical tomography, is equally sensitive to the parameter it measures at all positions throughout the measurement volume [4]. A soft-field sensor, such as the capacitance tomograph, however, has a sensitivity that varies with position within the measurement volume. In a dual mode system that uses both soft- and hard-field sensors, the different sensors often provide complementary process information that can improve the measurement results [4].

To evaluate the setup, the optical and capacitance sensors were placed approximately 20 mm from each other and on the same plane, so that the flow characteristics can be assumed to be the same through the field of view of both sensors. The capacitance sensor consists of eight electrodes which were designed using the finite element method (FEM). The material selected for these electrodes is copper plate, due to its high conductivity. The optical sensor consists of 16 pairs of transmitters and receivers. The overall hardware system is shown in Fig. 1. A fan-beam projection approach was used to obtain higher spatial resolution than a system with a similar number of sensors in parallel projection [5].
Fig. 1

DMT system

2.2 Image Reconstruction for Optical Tomography

The sensor output model used for the optical sensor assumes that light beams travel in straight lines towards the receivers; the output depends on the blockage effect, where solid particles intercept the light beam. The approach is based on the following five assumptions.
  1. The application is for medium concentration distributions in the range 0–35% [2, 5, 6].

  2. The relationship between the number of solid particles passing through a beam and the corresponding sensor output voltage is linear [2, 5].

  3. The emission of light intensity from each LED is uniform along all covered directions. In a single projection, it is assumed that each photodiode is exposed to a beam from the emitter, i.e. a single projection results in 16 light beams from the emitter towards the photodiodes. Each light beam possesses a different width, depending on the sensor geometry and projection angle. The light beam width is shown in Fig. 2.
     Fig. 2
     The light beam for the optical sensor

  4. The attenuation factor for gases is assumed to be zero, while the attenuation factor for solid particles is assumed to be one: all light incident on the surface of a solid particle is fully absorbed by the particle.

  5. Light scattering and beam divergence effects are neglected [2, 5, 6].

To generate the sensitivity matrices for an image resolution of 32 × 32 square pixels, a custom software algorithm written in Visual C++ is used. The value of each element in the sensitivity map for the optical sensor output is '1' if light passes through that element and '0' if no light passes through.

To generate the series of sensitivity matrices for different resolutions, custom software written in Visual Basic is used. In accordance with computed tomography standards, an image plane resolution of 512 by 512 pixels was selected for mapping the area of interest [5]. Through programming, the 128 nodes are mapped into Cartesian coordinates where the top-left corner is at (0,0) and the bottom-right corner is at (512,512), as shown in Fig. 3. Node P0 (Tx0) is placed at coordinate (0,256), and each subsequent node is placed 2.8125° from the previous one in the clockwise direction. The coordinates are computed using the following formulas:
$$ P_{n} \cdot x = r - \cos (n \times \theta ) \times r $$
$$ P_{n} \cdot y = r + \sin ( - n \times \theta ) \times r $$
where Pn · x =  the x-position of nth node, Pn · y = the y-position of nth node, r = the radius of circle which is equal to 256, θ = the angle between each node from centre of circle [point (256,256)], which is equal to 2.8125°.
Fig. 3

Process of obtaining the sensitivity matrix for the view from Tx15 to Rx5
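For illustration, the coordinate mapping above can be written as a short routine. The sketch below is in standard C++ rather than the original Visual Basic program, with illustrative names; it simply evaluates the two formulas for all 128 nodes.

```cpp
#include <cmath>
#include <cstdio>

// Sketch of the node-coordinate mapping: 128 nodes on a circle of radius r = 256,
// angular step 2.8125 degrees, node P0 (Tx0) at (0, 256), subsequent nodes clockwise.
struct Point { double x, y; };

void computeNodeCoordinates(Point nodes[128]) {
    const double PI = 3.14159265358979323846;
    const double r = 256.0;
    const double theta = 2.8125 * PI / 180.0;       // angular step in radians
    for (int n = 0; n < 128; ++n) {
        nodes[n].x = r - std::cos(n * theta) * r;   // Pn.x = r - cos(n*theta)*r
        nodes[n].y = r + std::sin(-n * theta) * r;  // Pn.y = r + sin(-n*theta)*r
    }
}

int main() {
    Point nodes[128];
    computeNodeCoordinates(nodes);
    std::printf("P0 = (%.1f, %.1f)\n", nodes[0].x, nodes[0].y);  // expected (0.0, 256.0)
    return 0;
}
```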

After the coordinates for each node are obtained, a 512 × 512 pixel rectangular picture box is employed to create the image plane in the program. The picture box is divided into n × n small rectangles, where n is the desired dimension of the sensitivity map (i.e. the resolution of the image). The total number of pixels in each small rectangle, Amax, can be determined using the following equation
$$ A_{\max } = \left( {\frac{512}{n}} \right)^{2} $$
where n is the chosen resolution. To obtain the sensitivity map of a specific view in a projection, the shape of the light beam (a hexagon with six nodes) is drawn into the picture box based on the orientation of the corresponding emitter and receiver nodes. Using the graphics-memory retrieval function provided by the Win32 API function call library, the value of each pixel in the picture can be inspected after the drawing. By counting the number of changed pixel values within rectangle-xy (where x and y specify the position of the rectangle in the picture box), the number of light-occupied pixels in rectangle-xy, Axy, is obtained. Axy provides a relative measure of how many pixels in a small rectangle are occupied by a specific light beam. However, it does not give the net percentage contribution of a specific light beam to a specific rectangle, because the distribution of light beams across the rectangles is non-uniform. Therefore, a normalized sensitivity map has to be determined based on the contribution of each view in a specific rectangle. The algorithm below was used in the program to obtain the sensitivity maps.
$$ M_{Tx,Rx} \left( {x,y} \right) = \sum\limits_{n = 0}^{k} {\sum\limits_{m = 0}^{k} {B_{x,y} \left( {n,m} \right)} } \quad \left\{ {\begin{array}{*{20}c} {B_{x,y} \left( {n,m} \right) = 0\,{\text{white(unchange)}}} \\ {B_{x,y} \left( {n,m} \right) = 1\,{\text{black(changed)}}} \\ \end{array} } \right. $$
$$ N\left( {x,y} \right) = \sum\limits_{Tx = 0}^{15} {\sum\limits_{Rx = 0}^{15} {M_{Tx,Rx} \left( {x,y} \right)} } $$
$$\overline{M}_{Tx,Rx} \left( {x,y} \right) = \left\{\begin{array}{ll} {\frac{{M_{Tx,Rx} \left( {x,y} \right)}}{{N\left({x,y} \right)}}} & N_{x,y} > 0 \hfill \\ 0 & N_{x,y} = 0 \hfill \\ \end{array} \right. $$
where MTx,Rx(x, y) = the sensitivity map for the view from Tx to Rx before normalization; Bx,y(n, m) = a Boolean array of dimension k × k representing the pixels in rectangle-xy, obtained by retrieving the computer graphics memory after the drawing is completed (a pixel (n, m) occupied by light is represented by 1, otherwise it is set to 0); N(x, y) = the sum of the same element-xy over the 256 sensitivity maps before normalization; \( \overline{M}_{Tx,Rx} \left( {x,y} \right) \) = the normalized sensitivity matrix for the view from Tx to Rx; n and m = the nth row and mth column in the rectangle; k = \( \left( {\frac{512}{n} - 1} \right) \), where n here denotes the desired resolution. Both x and y range from 0 to n − 1, and both Tx and Rx range from 0 to 15.

By using a counting loop in the computer program, the number of 'black' pixels in the array can be determined. To obtain the normalized sensitivity map, the equation above is used: each element in the sensitivity map is divided by the total number of pixels occupied by the 256 views in the same rectangle (element), Nx,y. For a sensitivity map with resolution 16 × 16, the value of Nx,y is shown in the denominator of the corresponding rectangle at the right of Fig. 3. For example, N6,4 has a value of 9,078; hence the normalized value for element (6,4) of the Tx15–Rx5 view is 0.0833 (767/9,078), which means that the view Tx15–Rx5 contributes 8.33% of the total light-occupied area in rectangle (6,4).
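A minimal sketch of this normalization step is given below. It assumes the per-view pixel counts MTx,Rx(x, y) have already been filled in (in the original program they come from drawing the beam hexagons and reading back the picture-box graphics memory via the Win32 API); the container types and names are illustrative.

```cpp
#include <vector>

// Normalization of the sensitivity maps, following the equations above.
// M[v][x][y] = number of light-occupied pixels of view v (v = 16*Tx + Rx) in rectangle (x, y);
// Mbar[v][x][y] = normalized value M[v][x][y] / N(x, y), or 0 where N(x, y) = 0.
void normalizeSensitivityMaps(const std::vector<std::vector<std::vector<double>>>& M,
                              std::vector<std::vector<std::vector<double>>>& Mbar)
{
    const int views = (int)M.size();          // 256 views for 16 Tx x 16 Rx
    const int res   = (int)M[0].size();       // e.g. 16 or 32
    Mbar = M;                                 // allocate output with the same shape
    for (int x = 0; x < res; ++x) {
        for (int y = 0; y < res; ++y) {
            double N = 0.0;                   // N(x, y): total over all views
            for (int v = 0; v < views; ++v) N += M[v][x][y];
            for (int v = 0; v < views; ++v)
                Mbar[v][x][y] = (N > 0.0) ? M[v][x][y] / N : 0.0;
        }
    }
}
```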

In this sensor, the linear back projection (LBP) algorithm is used to perform the image reconstruction [5]. Combining the projection data from each sensor with its computed sensitivity maps generates the concentration profile. To reconstruct the image, each sensitivity matrix is multiplied by its corresponding sensor reading and the results are summed. The process can be expressed mathematically as
$$ V_{\text{LBP}} (x,y) = \mathop \sum \limits_{{T_{x} = 0}}^{15} \mathop \sum \limits_{{R_{x} = 0}}^{15} S_{{R_{x} T_{x} }} \overline{M}_{{T_{x} ,R_{x} (x,y)}} $$
where VLBP(x, y) is the voltage distribution obtained using the LBP algorithm; SRxTx is the signal-loss amplitude at receiver Rx for projection Tx, in volts; and \( \overline{M}_{{T_{x} ,R_{x} (x,y)}} \) is the normalized sensitivity matrix for the view from Tx to Rx [5].
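The LBP summation above maps directly onto two nested loops over transmitters and receivers. The sketch below uses a 32 × 32 image and illustrative array names.

```cpp
// Linear back projection: V_LBP(x, y) = sum over Tx, Rx of S[Rx][Tx] * Mbar[Tx][Rx][x][y].
// S[rx][tx] is the signal-loss amplitude (in volts) at receiver rx for projection tx;
// Mbar is the normalized sensitivity matrix for each view.
void linearBackProjection(const double S[16][16],
                          const double Mbar[16][16][32][32],
                          double V[32][32])
{
    for (int x = 0; x < 32; ++x) {
        for (int y = 0; y < 32; ++y) {
            V[x][y] = 0.0;
            for (int tx = 0; tx < 16; ++tx)
                for (int rx = 0; rx < 16; ++rx)
                    V[x][y] += S[rx][tx] * Mbar[tx][rx][x][y];
        }
    }
}
```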

2.3 Image Reconstruction for Electrical Capacitance Tomography

The electrical capacitance sensors measure the change in capacitance in order to reconstruct the cross-sectional distribution of permittivity inside the pipe [1, 3, 4, 7]. The output of the electrical capacitance sensor is linearly proportional to the distribution of solid particles inside the pipe cross-section and covers the full range of distributions (0–100%). The sensitivity matrix for electrical capacitance tomography (ECT) is calculated on a grid of 32 × 32 square pixels, giving the same resolution as the optical tomography.

The most popular image reconstruction technique for electrical capacitance sensors is the linear back projection (LBP). In this method an image is obtained by superimposing a set of predetermined sensitivity maps, using the measured data as weighting factors.

2.4 Image Reconstruction for Image Fusion

One of the techniques used to perform image fusion is the use of wavelet transforms. The block diagram of a generic wavelet-based image fusion scheme is shown in Fig. 4.
Fig. 4

Block diagram of a generic wavelet-based image fusion scheme

A wavelet transform is first performed on each source image, then a fusion decision map is generated based on a set of fusion rules. The fused wavelet-coefficient map is constructed from the wavelet coefficients of the source images according to the fusion decision map. Finally, the fused image is obtained by performing the inverse wavelet transform.

From this scheme, it can be seen that the fusion rules play a very important role in the fusion process. The fusion rules used in this research are illustrated in Fig. 5.
Fig. 5

Fusion process

When constructing each wavelet coefficient of the fused image, we need to determine which source image better describes that coefficient. This information is kept in the fusion decision map, which has the same size as the original image. Each value is the index of the source image that is more informative for the corresponding wavelet coefficient, so a decision is made for every coefficient. One way to make the decision for a coefficient of the fused image is to consider only the corresponding coefficients in the source images, as illustrated by the red pixels; this is called a pixel-based fusion rule. Another way is to consider not only the corresponding coefficients but also their close neighbours, say a 3 × 3 or 5 × 5 window, as illustrated by the blue and shaded pixels; this is called a window-based fusion rule. The window-based approach exploits the fact that there is usually a high correlation among neighbouring pixels.
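As a sketch of the window-based rule, the decision map for one wavelet sub-band can be built by comparing a local activity measure between the two source coefficient maps. The sum of squared coefficients in a 3 × 3 window used here is one common choice and an assumption; the paper does not specify the measure it uses, and the wavelet transform itself is assumed to have been applied already.

```cpp
#include <vector>

// Window-based fusion of one wavelet sub-band.
// A and B are the wavelet-coefficient maps of the two source images (same size);
// decision[i][j] = 0 keeps A's coefficient, 1 keeps B's.
using Map = std::vector<std::vector<double>>;

static double windowActivity(const Map& m, int i, int j, int win) {
    const int h = (int)m.size(), w = (int)m[0].size(), r = win / 2;
    double act = 0.0;
    for (int di = -r; di <= r; ++di)
        for (int dj = -r; dj <= r; ++dj) {
            const int ii = i + di, jj = j + dj;
            if (ii >= 0 && ii < h && jj >= 0 && jj < w)
                act += m[ii][jj] * m[ii][jj];     // squared-coefficient "activity"
        }
    return act;
}

Map fuseCoefficients(const Map& A, const Map& B,
                     std::vector<std::vector<int>>& decision, int win = 3)
{
    const int h = (int)A.size(), w = (int)A[0].size();
    Map fused(h, std::vector<double>(w, 0.0));
    decision.assign(h, std::vector<int>(w, 0));
    for (int i = 0; i < h; ++i)
        for (int j = 0; j < w; ++j) {
            // keep the coefficient whose neighbourhood carries more energy
            const bool useB = windowActivity(B, i, j, win) > windowActivity(A, i, j, win);
            decision[i][j] = useB ? 1 : 0;
            fused[i][j] = useB ? B[i][j] : A[i][j];
        }
    return fused;
}
```

A pixel-based rule corresponds to win = 1, i.e. comparing only the coefficients themselves.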

In producing the fused image from the raw data of both sensors, the following assumptions are made:

  1. The sensitivity maps of the ECT and OT sensors have the same square-grid resolution, i.e. 32 × 32.

  2. Ci(∆T) defines the temporal material concentration for the ith pixel of a pipe cross-section at time interval ∆T. Let piO(∆t) define the corresponding pixel value of the reconstructed image obtained from data captured by the optical sensor, and piC(∆t) the pixel value derived from the electrical capacitance sensor. Assuming both sensors have the same temporal resolution, Ci(∆T) = piO(∆t) = piC(∆t).

  3. Image fusion provides the full range of distributions, as shown in Fig. 6. The threshold distribution level (L) for capacitance tomography is L = 35%; the value of L is based on the maximum distribution measurable by the optical tomography sensor.
     Fig. 6
     Distribution of the DMT sensor

  4. The fused image is derived by adjusting the pixel values piC according to the detected regions. Where the concentration is low (piC ≤ L), piC is not taken into account and PiF = piO is applied; where piO ≥ L, the ith pixel follows PiF = piC. Let PijF represent the pixel values in the final frame. Each pixel PijF of the n final frames must meet the condition below (a code sketch of this fusion rule is given after the range definitions that follow).
$$ \sum\limits_{j = 1}^{n} P_{ij}^{F} = \left\{ \begin{array}{ll} \sum\limits_{j = 1}^{n} p_{ij}^{O}, & p_{ij}^{C} \le L \\ \sum\limits_{j = 1}^{n} p_{ij}^{C}, & p_{ij}^{O} \ge L \end{array} \right. $$
where n is the number of frames, i is the pixel number, and j is the frame number; PijF is the corresponding pixel in the fused image, pijO is the pixel value from optical tomography and pijC is the pixel value from capacitance tomography.

The reconstructed image from the electrical capacitance sensor represents the capacitance distribution in a pipe cross-section as a normalized concentration distribution c, 0 ≤ c ≤ 1, over the full range of concentrations. The reconstructed image from the optical sensor represents a normalized intensity distribution, 0 ≤ O ≤ 1, below the threshold distribution (L).
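A minimal sketch of the threshold rule in assumption 4, operating on the two normalized 32 × 32 reconstructed images (pixel values in 0–1, L = 0.35 for the 35% threshold), is given below; names are illustrative.

```cpp
// Threshold-based DMT fusion: below the threshold L the optical pixel is kept,
// otherwise the capacitance pixel is kept (assumption 4 above).
const int RES = 32;       // image resolution
const double L = 0.35;    // 35% threshold distribution level

void fuseFrames(const double pO[RES][RES],   // reconstructed optical image, 0..1
                const double pC[RES][RES],   // reconstructed capacitance image, 0..1
                double pF[RES][RES])         // fused image
{
    for (int x = 0; x < RES; ++x)
        for (int y = 0; y < RES; ++y)
            pF[x][y] = (pC[x][y] <= L) ? pO[x][y] : pC[x][y];
}
```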

The DMT algorithm is developed using the Visual C++ programming language. The flow chart representing the steps involved in the algorithm is shown in Fig. 7.
Fig. 7

Flow chart for DMT algorithm

2.5 Image Analysis (Reconstructed Image Error Measurements)

The difference image between the original and reconstructed images gives a quantitative and qualitative measure for comparing the quality of the reconstructed image with the original. The metric from error theory in measurement science used here to evaluate the discrepancy between the reconstructed and the standard/original image is the peak signal-to-noise ratio (PSNR). A quick survey by Meany [8] of papers in the IEEE Transactions on Image Processing found that nine out of ten papers surveyed used PSNR as their measure of quality/error. PSNR is calculated using the following formula (CFHD Quality Analysis [9]).
$$ PSNR = 10\log_{10} \left( \frac{f(x,y)_{\max }^{2}}{\frac{1}{n^{2}}\sum\limits_{x = 0}^{n - 1} \sum\limits_{y = 0}^{n - 1} \left[ f(x,y) - f_{A}(x,y) \right]^{2}} \right)\,{\text{dB}} $$
where f(x, y) = the original image, f(x, y)max = the maximum value of the entire image, fA(x, y) = the reconstructed image, and n² = the image size (number of pixels).

In general, the higher the PSNR, the higher the quality of the image. If two images are identical, the PSNR is infinite, since the ratio of signal to noise is then unbounded; here the 'signal' is the original image and the 'noise' is the reconstruction error (Ting [10]). The absolute value of the PSNR is not meaningful on its own, but the comparison between values for different reconstructed images provides a measure of quality. All the calculations for the error analyses were implemented in Visual Basic.
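The PSNR formula above translates directly into code; the sketch below assumes square images stored as 2-D arrays of doubles.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// PSNR = 10*log10( f_max^2 / MSE ), where MSE is the mean squared error between
// the original image f and the reconstructed image fA (both n x n).
double psnr(const std::vector<std::vector<double>>& f,
            const std::vector<std::vector<double>>& fA)
{
    const int n = (int)f.size();
    double fmax = 0.0, mse = 0.0;
    for (int x = 0; x < n; ++x)
        for (int y = 0; y < n; ++y) {
            fmax = std::max(fmax, f[x][y]);
            const double d = f[x][y] - fA[x][y];
            mse += d * d;
        }
    mse /= (double)(n * n);
    if (mse == 0.0)                                     // identical images
        return std::numeric_limits<double>::infinity();
    return 10.0 * std::log10((fmax * fmax) / mse);
}
```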

3 Results and Discussion

The experiment for the static distribution flow model is time independent. The objects used in this study simulate different flow models, such as the single object and complex object flow models. For this simulation, a customized graphical user interface (GUI) was developed to represent an artificial two-dimensional cross-sectional flow regime in which the image object is drawn onto a 32 × 32 resolution bitmap (the same resolution as the real-time cross-sectional image). To verify the performance of the DMT system, image quality assessment (error analysis) was carried out using different image reconstruction algorithms for the different flow models.

3.1 Single Object Flow Model

In order to test the performance of the DMT system at high spatial resolution, a single object flow model was tested. The expected result was that the DMT would be able to detect and measure a small-diameter object.

For the single object flow model, a small object is selected. The diameter of the object is measured using an electronic digital caliper, and the size of the object is converted to a number of pixels according to the scale of the image plane in the forward model. The equation below gives the conversion from mm to pixels.
$$ n_{\text{pixels}} = \left( \frac{{\text{Total}}\,{\text{pixel}}\,{\text{sensitivity}}\,{\text{maps}}}{{\text{diameter}}\,{\text{of}}\,{\text{sensor}}} \right) \times D_{\text{mm}} $$
where the total number of pixels in the sensitivity maps is 1,024, the diameter of the sensor is 100 mm, npixels is the resulting number of pixels, and Dmm is the measured object diameter (mm).
The properties of the object are tabulated in Table 2. The selected objects are spheres because they match the shape of the plastic beads that are used in industrial applications.
Table 2

Phantom properties

Single object   Shape   Diameter (mm)   Object diameter (pixel)   Object area (pixel²)
Wire-wrap       Round   0.45            2.88                      26

The DMT tomograms for the modeled images are generated by predicting the sensor values during object disruption. The PSNR errors between the offline reconstructed images and the original modeled image were analyzed. Table 3 shows the tomograms for the true distribution of the object; alongside each tomogram, the color bar represents the color scale for the density of the tomogram, with 'LOW' representing low density and 'HIGH' representing high density in the image.
Table 3

Tomogram for single object

From the experimental results, the PSNR error data of ECT, OT and DMT with the linear back projection algorithm are tabulated in Table 4. The results show that the DMT provides a better quality image than the OT and ECT: the DMT PSNR is higher by 10.82% compared to the ECT and by 7.11% compared to the OT. The image fusion of DMT thus provides better spatial resolution than a single modality. The results also show that the OT provides a better quality image than the ECT. Figure 8a shows the performance of the image fusion compared to the single modalities.
Table 4

PSNR result analysis of different sensors

Type of sensor   PSNR (dB)
ECT              43.89
DMT              49.22
OT               45.72

In order to obtain the best quality image for DMT, the number of iterations should be selected based on the PSNR analysis. The PSNR error data of the DMT image reconstruction for various numbers of iterations are shown in Table 5 and Fig. 8.
Table 5

PSNR result analysis for DMT

No. of iterations   PSNR (dB)
0 (LBP)             49.22
1                   49.73
2                   49.89
3                   49.92
4                   49.98
5                   50.09
6                   50.12
7                   50.19
8                   50.25
9                   50.37
10                  50.45
11                  50.39
12                  50.31
13                  50.24
14                  50.11
15                  49.99

Fig. 8

a PSNR analysis of different sensors for the single-object model. b PSNR analysis for different numbers of iterations

A single object with a diameter of 0.45 mm was traced, and zero iterations (the LBP image) gives the lowest PSNR. As the iterative reconstruction algorithm proceeds, the PSNR increases from the first iteration up to its maximum value at the 10th iteration. After the 10th iteration, the PSNR values decrease, because the image computed at the 10th iteration already matched the input projection measurements adequately in the forward transform. This agrees with the statement in [1] that after a certain number of iterations the fidelity of the images starts to deteriorate steadily: beyond the 10th iteration, the difference between the projection measurements of the images produced and the actual measurements grows.
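The paper does not name the iterative algorithm it uses; the sketch below shows a Landweber-type iteration, a common refinement of LBP in process tomography, purely for illustration. The forward projection, the step size alpha and the clamping to [0, 1] are assumptions.

```cpp
#include <algorithm>
#include <vector>

// Illustrative Landweber-type iteration (the paper does not specify its iterative
// algorithm; this is one common refinement of an LBP starting image).
// Mbar: normalized sensitivity maps, flattened as [view][pixel];
// s: measured projection data per view; img: image, initialized with the LBP result.
void iterativeRefine(const std::vector<std::vector<double>>& Mbar,
                     const std::vector<double>& s,
                     std::vector<double>& img,
                     int iterations, double alpha)
{
    const int views  = (int)Mbar.size();
    const int pixels = (int)img.size();
    for (int it = 0; it < iterations; ++it) {
        // forward projection: predicted measurement for each view
        std::vector<double> err(views, 0.0);
        for (int v = 0; v < views; ++v) {
            double pred = 0.0;
            for (int p = 0; p < pixels; ++p) pred += Mbar[v][p] * img[p];
            err[v] = s[v] - pred;                 // residual against the real measurement
        }
        // back-project the residual and update the image
        for (int p = 0; p < pixels; ++p) {
            double corr = 0.0;
            for (int v = 0; v < views; ++v) corr += Mbar[v][p] * err[v];
            img[p] = std::clamp(img[p] + alpha * corr, 0.0, 1.0);  // keep values in [0, 1]
        }
    }
}
```

The behaviour in Table 5, where the PSNR improves up to about the 10th iteration and then declines, is the usual criterion for choosing when to stop such an iteration.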

3.2 Complex Object Flow Model

The aim of investigating the complex objects flow model is to test the reliability of the DMT sensor in the middle zone, which is blocked by solid particles, and where this model follows a turbulent flow model. For this purpose, all the objects used are identical to the object used in the single object flow model: each has a diameter of 0.45 mm, and the reconstructed images were assessed using PSNR error analysis. The tomograms obtained from the experiments are shown in Table 6.
Table 6

Tomogram for complex object

The calculations of PSNR for the different sensing methods are summarized in Table 7. The results show that the DMT provides a better quality image than the OT and ECT: the DMT PSNR is higher by 5.92% compared to the ECT and by 32.89% compared to the OT. The analysis is shown in Fig. 9. The optical sensor has a lower PSNR due to its inability to penetrate the middle part of the complex object, while the ECT produces a blurred image in the middle part of the object (Yan et al. [11]). Based on the PSNR analysis, however, the DMT system is reliable for use with a turbulent flow model in real time.
Table 7

(a) PSNR error for different sensors; (b) PSNR analysis for different numbers of iterations

(a)
Type of sensor   PSNR (dB)
ECT              38.12
DMT              40.52
OT               27.19

(b)
No. of iterations   PSNR (dB)
0                   40.52
1                   41.32
2                   41.42
3                   41.50
4                   41.57
5                   41.61
6                   41.67
7                   41.74
8                   41.78
9                   41.82
10                  41.86
11                  41.73
12                  41.44
13                  41.35
14                  41.22
15                  40.83

Fig. 9

a PSNR analysis with different sensors for the complex object flow model. b PSNR analysis with different numbers of iterations for the complex object flow model

Simulation image analyses for the various flow models show that the DMT system is capable of producing better quality images than a single modality with the LBP reconstruction algorithm. However, the LBP algorithm alone is still not adequate for producing a good quality image.

The PSNR calculations for the different sensing methods and for the DMT iterations with the complex objects flow model are listed in Table 7.


From the error assessment theory discussed earlier, it is known that a higher PSNR represents better image quality. The analysis of the graphs shows that the LBP algorithm by itself does not produce good image quality, because images reconstructed using LBP are often distorted [11] and blurred. The iterative reconstruction algorithm offers a solution for minimizing the blurring artifacts of the reconstructed image produced by the LBP algorithm. This advantage can be clearly observed for the single object, multiple object and complex object flow models. However, there is a maximum number of iterations beyond which the images no longer improve, observed as the maximum point of the PSNR; this point is also referred to as the optimum level for performing the image reconstruction. After this point, the PSNR continues to decrease while the processing time increases.

4 Conclusions

The development of a multimodal sensing and image fusion algorithm has been discussed. The simulation results from the dual-sensor tomography show that the sensor system can produce better image quality and resolution throughout the full range of concentration distributions than the single-modality methods. Overall, the resolution of the sensors is considered good, given that they can provide a reliable measurement of solid objects as small as 0.45 mm in diameter. In addition, the DMT is able to obtain the image of multiple objects without ambiguity. The iterative reconstruction algorithm has the potential to minimize the smearing around the image that is clearly present in the image generated using the LBP algorithm.


Acknowledgments

The author would like to thank the International Atomic Energy Agency (IAEA) for its sponsorship and support in completing the training course at Technical University of Lodz, Poland (MAL/03036). A special thanks and appreciation to Dr. Volodymyr Mosorov and Prof. Dr. Dominik Sankowski of Technical University of Lodz.

References

  1. Yang, W. Q., & Liu, S. (2000). Role of tomography in gas/solid flow measurement. Flow Measurement and Instrumentation, 11, 237–244.
  2. Green, R. G., Abdul Rahim, R., Evans, K., Dickin, F. J., Naylor, B. D., & Pridmore, T. P. (1998). Concentration profiles in a gravity chute conveyor by optical tomography measurement. Powder Technology, 95, 49–54.
  3. Dyakowski, T., Johansen, G. A., Sankowski, D., Mosorov, V., & Wlodarczky, J. (2005). A dual modality tomography system for imaging gas/solids flow. In Proceedings of the 4th World Congress on Industrial Process Tomography.
  4. Johansen, G. A., Froystein, T., Hjertaker, B. T., & Olsen, O. (1996). A dual sensor flow imaging tomographic system. Measurement Science and Technology, 7, 297–307.
  5. Abdul Rahim, R., Chan, K. S., Pang, J. F., & Leong, L. C. (2005). A hardware development for optical tomography system using switch mode fan beam projection. Sensors and Actuators A.
  6. Yan, C., Liao, Y., Lai, S., Gong, J., & Zhoa, Y. (2002). A novel optical fibre process tomography structure for industrial process control. Measurement Science and Technology, 13, 1898–1902.
  7. Soleimani, M., Lionheart, W. R. B., Byars, M., & Pendleton, J. (2005). Nonlinear image reconstruction of electrical capacitance tomography (ECT) based on a validated forward problem. In Proceedings of the 4th World Congress on Industrial Process Tomography.
  8. Meany, J. J. (2002). Measures of signal performance. ESE 578—Digital Representation of Signals, lecture notes. St. Louis.
  9. CFHD Quality Analysis. (2004). Quality analysis comparing CineForm's Visually Perfect™ HD Codec versus native camera formats. USA: CineForm.
  10. Ting, C. (1999). A study of spatial colour interpolation algorithms for single-detector digital cameras. Psych221/EE362 course project, Stanford University.
  11. Yan, H., Liu, L. J., Xu, H., & Shao, F. Q. (2001). Image reconstruction in electrical capacitance tomography using multiple linear regression and regularization. Measurement Science and Technology, 12, 575–581.

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  • Rasif Mohd Zain (1)
  • Ruzairi Abdul Rahim (2)
  • Mohd Hafiz Fazalul Rahiman (3)
  • Jaafar Abdullah (1)

  1. Malaysian Nuclear Agency, Bangi, Malaysia
  2. Faculty of Electrical Engineering, Universiti Teknologi Malaysia, Skudai, Malaysia
  3. School of Mechatronic Engineering, Universiti Malaysia Perlis, Arau, Malaysia
