
1 Introduction

Multispectral imaging (MSI) has proven beneficial to a great variety of applications, but its use in general computer vision has been limited by the complexity of imaging set-ups, calibration and specific imaging pipelines. Spectral filter array (SFA) technology [9] seems to provide an adequate solution to overcome this limitation. Indeed, increasing the number of spectral bands in filter arrays, combined with a high-resolution sensor, could lead to a small, efficient and affordable solution for single-shot MSI. In addition, SFA was developed around an imaging pipeline very similar to that of color filter arrays (CFA), which is well understood and already implemented in most solutions. In this sense, SFA provides a conceptual solution that could be exploited in actual vision systems in a relatively straightforward manner.

We consider that SFA technology may soon reach large-scale use. On the one hand, the SFA concept has been developed to a great extent using data from simulations, in particular on demosaicing [11, 12, 14, 26, 27, 29], but also on other aspects [8, 17, 21, 24, 28]. On the other hand, recent practical work on optical filters [1, 15, 16, 30], in parallel with the development of SFA camera prototypes in the visible electromagnetic range [5], in the near infra-red (NIR) [2] and in combined visible and NIR [7, 25], has led to the commercialization of solutions (e.g. IMEC [3], SILIOS [23], PIXELTEQ [18]). Furthermore, several cameras include custom filter arrays that are in-between CFA and SFA (e.g. Jia et al. [4] and Monno et al. [13]). It remains to validate the simulations with real data and to adapt the imaging framework before this solution can be declared ready for practical applications. We help address the first point by providing experimental data that can serve to validate the simulations.

Through this article, we provide a freely accessible database of SFA images. The spectral calibration of the camera and the illuminant used during acquisition are provided along with several SFA raw images of various scenes. These data can be used as benchmarks in future work by the research community and could lead to further development of SFA technology.

In the following sections, we first describe the camera design in terms of spectral sensitivity, spatial arrangement and hardware. We then show how the SFA database was constructed, by presenting the experimental setup and the illuminant used. Finally, we present a first benchmark that exploits the data: a visualization framework to display the multispectral data as an sRGB representation. To conclude, we outline the potential use of the proposed database in the research area and discuss future work.

Fig. 1.
figure 1

(a) Joint spectral characteristics of the optical filters and sensor of the camera used to acquire the database images [25]. (b) Spatial distribution of filters over the sensor, following the method of Miao et al. [10]. (c) Camera designed at the Le2i laboratory, composed of an FPGA board and an attached sensor board holding the detector array.

2 Camera Design

From our previous work [9, 25], we designed and developed a proof-of-concept prototype SFA imaging system that achieves snapshot multispectral capabilities. The camera setup is based on a commercial sensor, from the manufacturer e2v [22], coupled with a hybrid filter array for recovering visible and NIR information. The associated spectral filter array, manufactured by Silios Technologies [23], is hybridized onto the sensor. The relative spectral sensitivities of the camera cover the electromagnetic spectrum from 380 nm to 1100 nm. The spectral characterization of the camera is fully described in the related paper [25]. The resulting characteristics of this vision system are shown in Fig. 1. From this work, we want to provide a useful set of data for further practical investigation.

3 Database Description

We capture 18 scenes composed of several categories of objects, including metallic and biological objects, spatially and spectrally homogeneous/heterogeneous surfaces, objects showing specular reflections, translucent materials, industrial pigments, art pigments, clothes, etc. For the dataset, we fix a single exposure time, a single aperture and a single illuminant to limit the multiple-parameter dependence problems that could arise when analyzing multispectral images.

In practice, data is recovered from our camera through an Ethernet connection linking the FPGA board (Zedboard, see Fig. 1(c)) and a PC. The FPGA holds a mezzanine card, which is an electronic interface to the SFA sensor (the electronic design was initially developed by Lapray et al. [6]). Information concerning the hardware, such as the optics, the electronics and the exposure times, is given in Table 2. A simulated D65 source was chosen to illuminate the scene (see Fig. 2). Each object was small enough to lie in a region where the illumination was assumed to be sufficiently uniform; we will see later, however, that the illumination is far from flat.

Fig. 2.
figure 2

Measurement of the emission spectrum of the D65 simulator used for the acquisition of the database.

A pre-processing step is necessary before any use of the produced images. This processing consists of a dark correction and a downsampling; it is described in Thomas et al. [25]. All the images provided with this document are pre-processed accordingly and ready to use. The mosaiced images of the database are shown in Fig. 3. The entire database can be freely downloaded at http://chic.u-bourgogne.fr. The zip file is organised according to Table 1.
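As a rough illustration only, such a pre-processing stage might look like the sketch below. The dark frame, the clipping and the downsampling factor are assumptions for the example; the exact pipeline and parameters are those described in Thomas et al. [25].

```python
import numpy as np

def preprocess(raw, dark, factor=2):
    """Dark-correct a raw SFA frame, then downsample by block averaging.

    `dark` is assumed to be a dark frame captured with the same exposure;
    `factor` and the averaging scheme are illustrative assumptions, not
    the exact pipeline of Thomas et al. [25].
    """
    # Subtract the dark frame and clip negative values to zero.
    corrected = np.clip(raw.astype(np.int32) - dark.astype(np.int32), 0, None)
    # Crop to a multiple of `factor`, then average factor-by-factor blocks.
    h, w = corrected.shape
    h, w = h - h % factor, w - w % factor
    blocks = corrected[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```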

Table 1. The files can be downloaded at http://chic.u-bourgogne.fr; the link SFA_LDR points to a zip file containing one directory for each of the scenes.

4 Obtaining Color Images

Prior to performing any visualization, it is necessary to reconstruct the full-resolution color image from the sampled spectral mosaiced data.

Since the information acquired with the SFA method is intrinsically sparse over the full image resolution, we need a means to reconstruct the full spatial information of the spectral image. Here, the demosaicing algorithm of Miao et al. [10] is employed. This method is a natural choice of benchmark because the spatial arrangement of our filters (see Fig. 1(b)) was specifically selected following this method. There are 8 channels in the camera design, thus 8 independent images are produced from one mosaiced image. An example of a demosaiced image is shown in Fig. 4. These images are stored in a multiband tiff file in the database. Note that we have not applied any devignetting correction to the data; this can be seen in some images.
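To give an idea of what demosaicing a sparse channel involves, the sketch below fills one channel by iterative neighbour averaging given a boolean mask of the pixels carrying that channel. This is a deliberately simplistic stand-in, not the binary-tree algorithm of Miao et al. [10] actually used for the database.

```python
import numpy as np

def interpolate_channel(mosaic, mask, iters=20):
    """Fill one sparse SFA channel by iterative neighbour averaging.

    `mosaic` is the raw 2-D image and `mask` a boolean array marking
    the pixels that carry this channel. Simplistic stand-in for the
    binary-tree demosaicing of Miao et al. [10].
    """
    filled = np.where(mask, mosaic, 0.0).astype(float)
    weight = mask.astype(float)          # 1 where a value is known
    for _ in range(iters):
        fw = filled * weight
        # Sum of known 4-neighbours and their count (wrap-around borders).
        fsum = sum(np.roll(fw, s, ax) for ax in (0, 1) for s in (-1, 1))
        wsum = sum(np.roll(weight, s, ax) for ax in (0, 1) for s in (-1, 1))
        new = np.divide(fsum, wsum, out=np.zeros_like(fsum), where=wsum > 0)
        filled = np.where(mask, mosaic, new)   # keep measured samples fixed
        weight = np.maximum(mask, (wsum > 0).astype(float))
    return filled
```

With an 8-channel SFA, running such a routine once per channel mask yields the 8 full-resolution planes mentioned above.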

Fig. 3.
figure 3

Raw data after the pre-processing described in [25]. The SFA arrangement described in Fig. 1 can be clearly distinguished when zooming into an image.

Table 2. Summary of the global parameters and the SFA camera characteristics used for the acquisition of the database.
Fig. 4.
figure 4

Example of a demosaiced image from the database. Interpolation is performed with the Miao binary-tree algorithm [10]; channels 1 to 8 are reconstructed to provide the full spatial resolution of the images.

Fig. 5.
figure 5

Color image representation of the reconstructed multispectral images after applying a linear transform that maps the 8 channels to 3 color channels.

The color version of these images, shown in Fig. 5, is obtained by fitting a linear color transform from the 8 bands to CIEXYZ values, then converting to sRGB values. The linear model is based on the reflectance measurements of the Gretag Macbeth color checker in the visible and NIR shown in Fig. 6.

Fig. 6.
figure 6

Reflectance of the color checker patches used in our experiment. It is interesting to note that in the NIR the reflectance is mostly flat, but not the same for every color patch. (Thanks to Dr Yannick Benezeth and to the multispectral platform at Université de Bourgogne for the measurements and facilities.)

Table 3. Coefficients of the linear transform which converts the normalized camera data into colorimetric data.

The model is defined by M, which transforms color values C of the object into sensor values S, as in Eq. 1.

$$\begin{aligned} C=M^+.S, \end{aligned}$$
(1)

where \(M^+\) is the generalized inverse of M (here, the Moore-Penrose pseudo-inverse), computed from the data obtained by integrating the Gretag Macbeth reflectance spectra and the illumination over the sensor sensitivities and over the CIE 1931 2-degree standard observer, according to the CIE recommendations. To this aim, all data are re-sampled at 10 nm using linear interpolation, and the normalization factor k is computed from \(\bar{y}(\lambda )\) and the normalized illumination of Fig. 2. The data are provided in Table 3.
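Numerically, once the patch responses are assembled as matrices, fitting the 3x8 transform amounts to a least-squares problem solved with the pseudo-inverse. The sketch below uses random stand-in data; in the paper, S and C come from the integrated Macbeth responses described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 8 camera channels and 3 CIEXYZ values for 24 patches.
# In the paper these come from integrating the measured reflectances
# and the illuminant over the sensor sensitivities / CIE 1931 observer.
S = rng.uniform(size=(8, 24))      # camera responses, one column per patch
true_T = rng.normal(size=(3, 8))   # hypothetical ground-truth transform
C = true_T @ S                     # corresponding CIEXYZ values

# Least-squares fit of the transform of Eq. 1, C = M^+ . S,
# via the Moore-Penrose pseudo-inverse of the response matrix.
M_plus = C @ np.linalg.pinv(S)
```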

The CIEXYZ values are then transformed into sRGB following the standard formulation, and only an implicit gamut mapping, i.e. a clipping, is performed. Although for a three-band sensor the Luther-Ives condition may not be respected and a linear transform would probably not be sufficient, in our case of multispectral values the colorimetric error is very small. Note, however, that even though we incorporated the NIR part in the color characterization, the sensitivity of the sensor in the NIR domain impacts the accuracy of the color reconstruction due to some amount of metamerism. Indeed, as shown by Sadeghipoor et al. [19, 20], the NIR contribution to the signal is a source of noise for color accuracy. Our database may also help to evaluate the adequate processing that must be used to correct these data.
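For reference, the standard XYZ-to-sRGB conversion with clipping can be sketched as follows, using the IEC 61966-2-1 matrix for D65-adapted XYZ values scaled to [0, 1]:

```python
import numpy as np

# Linear XYZ -> sRGB matrix from IEC 61966-2-1 (D65 white point).
XYZ_TO_RGB = np.array([[ 3.2406, -1.5372, -0.4986],
                       [-0.9689,  1.8758,  0.0415],
                       [ 0.0557, -0.2040,  1.0570]])

def xyz_to_srgb(xyz):
    """CIEXYZ (D65, components in [0, 1]) to sRGB with gamut clipping."""
    rgb = xyz @ XYZ_TO_RGB.T
    rgb = np.clip(rgb, 0.0, 1.0)           # implicit gamut mapping: clipping
    # Piecewise sRGB transfer function (gamma encoding).
    return np.where(rgb <= 0.0031308,
                    12.92 * rgb,
                    1.055 * np.power(rgb, 1 / 2.4) - 0.055)
```

Applied pixel-wise to the CIEXYZ images obtained from the linear transform, this yields displayable color images such as those of Fig. 5.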

5 Conclusion

We acquired a database of SFA images with a prototype sensor sensitive in the visible and NIR parts of the electromagnetic spectrum. A great variety of objects has been captured, and the acquisition parameters, such as the scene illumination, have been measured. In addition, the colorimetric transform that permits the generation of color images is provided. A benchmark demosaicing, performed with the most established demosaicing method for SFA, is also given. These data may serve for the evaluation of state-of-the-art demosaicing and color reconstruction methods, as well as for further development and proofs of concept in this field.