1 Introduction

There are many real-world circumstances where light monitoring systems are used to improve illuminance control, human health and comfort, industrial security or energy efficiency. In these cases, the most common approach is to deploy a network of sensors that can integrate not only light information but also other magnitudes or pieces of multimedia information (sounds, images, pollution levels, temperature…). The communication between all these sensors is usually achieved through wireless connections, creating what is called a Wireless Multimedia Sensor Network (WMSN) [1, 15].

Among the most common applications of these technologies are light control in urban areas [9, 12], roads [22], smart buildings [14] and context-awareness [28]. Another very promising field, not yet sufficiently exploited, is their use in theaters and the filmmaking industry, where light control is extremely important [21]. Other situations requiring light measurement include ergonomics, safety, photography, cinematography, weather monitoring, theater set and interior design [17]. Distributed illumination control is thus becoming an emerging research topic [3, 4, 19]. A survey on sensor-based smart lighting can be found in [26].

Obviously, better light monitoring and control are obtained when a higher spatial resolution is employed, that is, when many light spots are measured. For this reason, in all these applications the cost of each sensor is a major concern, especially as the size of the grid, and thus the number of sensors, increases [6].

A good light sensing device should meet two important requirements: a homogeneous directional response, regardless of the direction in which light is received (flat directional sensitivity), and a spectral response matching that of the human eye. Most light sensors do not meet these requirements and often require directional correction devices (usually overlapping lenses) and spectral compensation (optical filters and/or the combined use of different types of sensors). These correction mechanisms increase the price of each sensor.

In previous work [29] the authors proposed a different approach based on the use of very simple sensors. The light measurement obtained from them is later corrected, both spectrally and directionally, by numerical methods, without the need for additional hardware in each sensor. This enables low-cost distributed systems with numerous light measurement points in many applications. Similar solutions have been reported for the calibration of wearable light exposure devices [7, 11], where only one light spot is needed and more sophisticated light sensors are used.

The directional correction system proposed in [29] assumes a known distribution of light sources. As will be shown in this paper, in many cases it is possible to generalize this correction of the light measurement to situations with indeterminate light sources without significantly altering the measurement error. Moreover, from a network of these light measuring spots, a lighting map of a certain region (usually a surface) can be drawn.

On the other hand, some lighting control systems are concerned not only with determining the luminous level at a certain point but also with finding the relative position between the sensors and the light sources [5, 13, 23]. This paper will also show a method to estimate the lighting power of the different light sources existing in a region or, equivalently, to draw a map of light sources. This is a very important issue in many fields, such as augmented reality [10].

In this paper, Section 2 presents the methodology used to directionally correct the illuminance measurements in four circumstances: unidirectional lighting (subsection 2.1), omnidirectional 2D lighting (subsection 2.2), omnidirectional 3D lighting (subsection 2.3), and general lighting conditions (subsection 2.4). Subsection 2.5 addresses the effect of sensor tilting on measurements, while subsection 2.6 tackles the problem of determining a map of light sources.

This methodology has been applied to an experimental prototype. Section 3 presents the results obtained in three sets of tests: directional correction for single light measurement (subsection 3.1), drawing a map of illuminance (subsection 3.2), and obtaining a map of light sources (subsection 3.3). Finally, the conclusions of this study are addressed in Section 4.

2 Materials and methods

Consider a sensor network consisting of a set of low-cost photodiodes. Each photodiode provides an electric current I_P that is proportional to the received illuminance according to the expression [16]

$$ {I}_P={K}_v{E}_v. $$
(1)

where the term E_v denotes the illuminance, i.e. the visible light power received per unit area of the diode. The proportionality constant K_v can be experimentally obtained and is usually provided by the manufacturer in the datasheet for certain measurement conditions, usually front illumination. In those cases where the direction of illumination does not meet this condition, the relationship between electric current and illuminance must be corrected by a factor ψ that takes into account the direction of the light received by the sensor. The electric current in the photodiode then becomes

$$ {I}_P={K}_v\psi {E}_v. $$
(2)

In this section, the directional correction factors for several lighting conditions are derived.

2.1 Unidirectional lighting

First of all, a sensor receiving luminous radiation from a single direction will be considered. The light source luminous emittance will be denoted M_v (lm/m²) and corresponds to the visible luminous power emitted per unit surface. The visible luminous flux received by the sensor will be denoted Φ_v (lm). For a rectangular sensor receiving homogeneous unidirectional front light (Fig. 1) the relationship between both magnitudes is

Fig. 1
figure 1

Unidirectional lighting conditions

$$ {\Phi}_{v0}={M}_v{A}_0={M}_v ab. $$
(3)

When the sensor receives a homogeneous unidirectional light oriented at an angle φ (Fig. 2), the luminous flux becomes

$$ {\Phi}_{v\varphi }={M}_v{A}_{\varphi }, $$
(4)

where

$$ {A}_{\varphi }=ac= ab\cos\varphi ={A}_0\cos\varphi, $$
(5)

and therefore

$$ {\Phi}_{v\varphi }={M}_v ab\cos\varphi ={\Phi}_{v0}\cos\varphi . $$
(6)
Fig. 2
figure 2

Tilted lighting

This can be written as

$$ {\Phi}_{v\varphi }={S}_{\varphi }{\Phi}_{v0}. $$
(7)

The following expression is called the directional sensitivity of the sensor:

$$ {S}_{\varphi }=S\left(\varphi \right)\equiv \frac{\Phi_{v\varphi }}{\Phi_{v0}}. $$
(8)

This is also the directional correction factor ψ in eq. (2). For tilted lighting it can be obtained from (7):

$$ \psi ={S}_{\varphi }=\cos \varphi . $$
(9)

2.2 Omnidirectional 2D lighting

A sensor receiving a homogeneous 2D omnidirectional light will now be considered (Fig. 3).

Fig. 3
figure 3

2D omnidirectional lighting

The luminous emittance corresponding to angles between φ and φ + dφ will be

$$ {dM}_v={J}_vd\varphi, $$
(10)

where J_v is the angular emittance, i.e. the luminous emittance per unit angle (in 2D). The total emittance received by the sensor is

$$ {M}_v={\int}_{-\frac{\pi }{2}}^{+\frac{\pi }{2}}{dM}_v={\int}_{-\frac{\pi }{2}}^{+\frac{\pi }{2}}{J}_vd\varphi ={J}_v{\int}_{-\frac{\pi }{2}}^{+\frac{\pi }{2}}d\varphi ={J}_v{\left[\varphi \right]}_{-\frac{\pi }{2}}^{+\frac{\pi }{2}}={J}_v\left[\frac{\pi }{2}-\left(-\frac{\pi }{2}\right)\right]=\pi {J}_v, $$
(11)

and therefore

$$ {J}_v=\frac{M_v}{\pi }. $$
(12)

The visible luminous flux received by the sensor due to the radiation coming from angles between φ and φ + dφ will be

$$ d{\Phi}_{v\ast p}={S}_{\varphi\ }d{\Phi}_{v0}={S}_{\varphi\ }{A}_0{dM}_v. $$
(13)
$$ d{\Phi}_{v\ast p}={A}_0{J}_v{S}_{\varphi }d\varphi . $$
(14)
$$ d{\Phi}_{v\ast p}=\frac{M_v}{\pi }{A}_0{S}_{\varphi }d\varphi . $$
(15)

The total visible luminous flux received by the sensor is

$$ {\Phi}_{v\ast p}={\int}_{-\frac{\pi }{2}}^{+\frac{\pi }{2}}d{\Phi}_{v\ast p}={\int}_{-\frac{\pi }{2}}^{+\frac{\pi }{2}}\frac{M_v}{\pi }{A}_0{S}_{\varphi }d\varphi =\frac{M_v}{\pi }{A}_0{\int}_{-\frac{\pi }{2}}^{+\frac{\pi }{2}}{S}_{\varphi }d\varphi . $$
(16)

Assuming that S_φ is a symmetrical function, which is usually the case, then

$$ {\Phi}_{v\ast p}=\frac{M_v}{\pi }{A}_02\underset{0}{\overset{+\frac{\pi }{2}}{\int }}{S}_{\varphi }d\varphi . $$
(17)

Defining the integral sensitivity as

$$ {S}_i\equiv {\int}_0^{+\frac{\pi }{2}}{S}_{\varphi }d\varphi, $$
(18)

then

$$ {\Phi}_{v\ast p}=\frac{M_v}{\pi }{A}_02{S}_i=\frac{2{S}_i}{\pi }{\Phi}_{v0}. $$
(19)

For omnidirectional 2D lighting, the directional correction factor is defined by

$$ {\psi}_{\ast p}\equiv \frac{\Phi_{v\ast p}}{\Phi_{v0}}, $$
(20)

and its value is

$$ {\psi}_{\ast p}\equiv \frac{\Phi_{v\ast p}}{\Phi_{v0}}=\frac{\frac{2{S}_i}{\pi }{\varPhi}_{v0}}{\Phi_{v0}}=\frac{2{S}_i}{\pi }. $$
(21)

For a flat sensor the directional sensitivity is given by

$$ {S}_{\varphi }=\cos \varphi, $$
(22)

thus the integral sensitivity is

$$ {S}_i\equiv {\int}_0^{+\frac{\pi }{2}}{S}_{\varphi }d\varphi ={\int}_0^{+\frac{\pi }{2}}\cos\varphi\ d\varphi ={\left[\sin\varphi \right]}_0^{+\frac{\pi }{2}}=1-0=1, $$
(23)

and the directional correction factor is

$$ {\psi}_{\ast p}=\frac{2{S}_i}{\pi }=\frac{2}{\pi}\kern0.0em . $$
(24)

For light sensing it is quite common that sensors include some type of directional correction, usually by integrating proper optics. In this case (Fig. 4), a flat (uniform) directional sensitivity is achieved:

$$ {S}_{\varphi }=1. $$
(25)
Fig. 4
figure 4

Optical compensation of tilted lighting

For these sensors the integral sensitivity is

$$ {S}_i\equiv {\int}_0^{+\frac{\pi }{2}}{S}_{\varphi }d\varphi ={\int}_0^{+\frac{\pi }{2}}d\varphi ={\left[\varphi \right]}_0^{+\frac{\pi }{2}}=\left[\frac{\pi }{2}-0\right]=\frac{\pi }{2}, $$
(26)

thus the directional correction factor is

$$ {\psi}_{\ast p}=\frac{2{S}_i}{\pi }=\frac{2\frac{\pi }{2}}{\pi }=1. $$
(27)

Summarizing, for omnidirectional 2D lighting the directional correction factor is 2/π for non-compensated sensors and 1 for optically compensated sensors.
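
As an illustrative check, the following Python sketch (the function name is ours) numerically evaluates the integral sensitivity of eq. (18) and the correction factor of eq. (21) for both sensitivity profiles:

```python
import numpy as np

def directional_correction_2d(S, n=10001):
    """Correction factor psi = (2/pi) * S_i, with S_i from eq. (18), cf. eq. (21)."""
    phi = np.linspace(0.0, np.pi / 2, n)
    S_i = np.trapz(S(phi), phi)   # integral sensitivity, eq. (18)
    return 2.0 * S_i / np.pi      # directional correction factor, eq. (21)

# Flat (non-compensated) sensor, S(phi) = cos(phi): psi = 2/pi ~ 0.6366
print(directional_correction_2d(np.cos))
# Optically compensated sensor, S(phi) = 1: psi = 1
print(directional_correction_2d(np.ones_like))
```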

2.3 Omnidirectional 3D lighting

A sensor receiving a homogeneous 3D omnidirectional light will be considered in this subsection. The light direction is referenced using the elevation (ε) and azimuth (α) angles, where

$$ \varepsilon =\frac{\pi }{2}-\varphi . $$
(28)

The luminous emittance corresponding to elevation angles between ε and ε + dε, and azimuth angles between α and α + dα, is

$$ {dM}_v={H}_v d\alpha d\varepsilon, $$
(29)

where H_v stands for the density of emittance, a constant value in the homogeneous case. The total emittance received by the sensor is

$$ {M}_v={\int}_0^{+\frac{\pi }{2}}{\int}_{-\pi}^{+\pi }{dM}_v={\int}_0^{+\frac{\pi }{2}}\left[{\int}_{-\pi}^{+\pi }{H}_vd\alpha \right]d\varepsilon ={H}_v{\int}_0^{+\frac{\pi }{2}}\left[{\int}_{-\pi}^{+\pi }d\alpha \right]d\varepsilon . $$
(30)
$$ {M}_v={H}_v{\int}_0^{+\frac{\pi }{2}}\left\{{\left[\alpha \right]}_{-\pi}^{+\pi}\right\}d\varepsilon ={H}_v{\int}_0^{+\frac{\pi }{2}}\left[\pi -\left(-\pi \right)\right]d\varepsilon ={H}_v{\int}_0^{+\frac{\pi }{2}}2\pi d\varepsilon . $$
(31)
$$ {M}_v=2\pi {H}_v\underset{0}{\overset{+\frac{\pi }{2}}{\int }}d\varepsilon =2\pi {H}_v{\left[\varepsilon \right]}_0^{+\frac{\pi }{2}}=2\pi {H}_v\left[\frac{\pi }{2}-0\right]={\pi}^2{H}_v. $$
(32)

Therefore

$$ {H}_v=\frac{M_v}{\pi^2}. $$
(33)

The visible luminous flux received by the sensor due to the radiation coming from elevation angles between ε and ε + dε, and azimuth angles between α and α + dα, will be

$$ d{\Phi}_{v\ast }={S}_{\varepsilon\ }d{\Phi}_{v0}={S}_{\varepsilon }{A}_0{dM}_v={A}_0{H}_v{S}_{\varepsilon } d\alpha d\varepsilon =\frac{M_v}{\pi^2}{A}_0{S}_{\varepsilon } d\alpha d\varepsilon . $$
(34)

where

$$ {S}_{\varepsilon} = S\left(\varepsilon \right)=S\left(\frac{\pi }{2}-\varphi \right). $$
(35)

The total visible luminous flux received by the sensor is

$$ {\Phi}_{v\ast }=\iint d{\Phi}_{v\ast }={\int}_0^{+\frac{\pi }{2}}\left[{\int}_{-\pi}^{+\pi}\frac{M_v}{\pi^2}{A}_0d\alpha \right]{S}_{\varepsilon }d\varepsilon . $$
(36)
$$ {\Phi}_{v\ast }=\frac{M_v}{\pi^2}{A}_02\pi {\int}_0^{+\frac{\pi }{2}}{S}_{\varepsilon }d\varepsilon =\frac{2{M}_v}{\pi }{A}_0{\int}_0^{+\frac{\pi }{2}}{S}_{\varepsilon }d\varepsilon . $$
(37)

Considering the value of the integral sensitivity

$$ {S}_i\equiv \underset{0}{\overset{+\frac{\pi }{2}}{\int }}{S}_{\varphi }d\varphi =\underset{0}{\overset{+\frac{\pi }{2}}{\int }}{S}_{\varepsilon }d\varepsilon, $$
(38)

the luminous flux can be written as

$$ {\Phi}_{v\ast }=\frac{2{M}_v}{\pi }{A}_0{S}_i=\frac{2{S}_i}{\pi }{\Phi}_{v0}. $$
(39)

For omnidirectional 3D lighting, the directional correction factor is defined by

$$ {\psi}_{\ast}\equiv \frac{\Phi_{v\ast }}{\Phi_{v0}}, $$
(40)

and its value is

$$ {\psi}_{\ast}\equiv \frac{\Phi_{v\ast }}{\Phi_{v0}}=\frac{\frac{2{S}_i}{\pi }{\varPhi}_{v0}}{\Phi_{v0}}=\frac{2{S}_i}{\pi }. $$
(41)

This is the same result obtained in (21). Thus, the directional correction factor for a flat sensor is

$$ {\psi}_{\ast }=\frac{2{S}_i}{\pi }=\frac{2}{\pi }, $$
(42)

and for the optically cosine-compensated sensor

$$ {\psi}_{\ast }=1. $$
(43)

Summarizing, for omnidirectional 3D lighting the directional correction factor is 2/π for non-compensated sensors and 1 for optically compensated sensors, the same result as for omnidirectional 2D lighting.
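
This value can also be checked by a simple Monte Carlo simulation under the homogeneous model of eq. (29), in which the emittance is uniform in (α, ε). In the sketch below (illustrative, arbitrary seed), the sample mean of the flat-sensor sensitivity over uniformly drawn elevations equals (2/π)S_i, i.e. the factor ψ:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.uniform(0.0, np.pi / 2, 1_000_000)  # elevations, uniform as in eq. (29)
S_eps = np.cos(np.pi / 2 - eps)               # flat sensor: S(phi) = cos(phi), phi = pi/2 - eps
# The azimuth integrates out (S does not depend on alpha), so the sample mean
# of S over uniform elevations equals (2/pi) * S_i, i.e. the factor psi:
print(S_eps.mean(), 2.0 / np.pi)              # both ~0.6366
```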

2.4 General lighting conditions

A sensor receiving general lighting will now be studied. The luminous emittance corresponding to elevation angles between ε and ε + dε, and azimuth angles between α and α + dα, will be

$$ {dM}_v={H}_vd\alpha d\varepsilon, $$
(44)

where H_v is the density of emittance, now a variable magnitude that can be expressed as H_v = H_v(α, ε). The visible luminous flux received by the sensor due to the radiation coming from elevation angles between ε and ε + dε, and azimuth angles between α and α + dα, is

$$ d{\Phi}_v={A}_0{H}_v\left(\alpha, \varepsilon \right){S}_{\varepsilon }d\alpha\ d\varepsilon . $$
(45)

The total visible luminous flux received by the sensor is

$$ {\Phi}_v=\iint d{\Phi}_v={\int}_0^{+\frac{\pi }{2}}\left[{\int}_{-\pi}^{+\pi }{H}_v\left(\alpha, \varepsilon \right){A}_0d\alpha \right]{S}_{\varepsilon }d\varepsilon . $$
(46)

Denoting by J_v(ε) the density of emittance at a certain elevation ε,

$$ {J}_v\left(\varepsilon \right)\equiv {\int}_{-\pi}^{+\pi }{H}_v\left(\alpha, \varepsilon \right)d\alpha, $$
(47)

then eq. (46) can be expressed as

$$ {\Phi}_v={A}_0{\int}_0^{+\frac{\pi }{2}}{J}_v\left(\varepsilon \right){S}_{\varepsilon }d\varepsilon . $$
(48)

For the optically cosine-compensated sensor, where S_φ = 1 (and hence S_ε = 1),

$$ {\Phi}_v={A}_0{\int}_0^{+\frac{\pi }{2}}{J}_v\left(\varepsilon \right){S}_{\varepsilon }d\varepsilon ={A}_0{\int}_0^{+\frac{\pi }{2}}{J}_v\left(\varepsilon \right)d\varepsilon ={A}_0{M}_v={\Phi}_{v0}. $$
(49)

Thus, the directional correction factor is

$$ \psi \equiv \frac{\Phi_v}{\Phi_{v0}}=1. $$
(50)

For a general sensor with non-uniform directional sensitivity

$$ {\Phi}_v={A}_0{\int}_0^{+\frac{\pi }{2}}{J}_v\left(\varepsilon \right){S}_{\varepsilon }d\varepsilon . $$
(51)

Defining

$$ {M}_{va}\equiv {\int}_0^{+\frac{\pi }{2}}{J}_v\left(\varepsilon \right){S}_{\varepsilon }d\varepsilon, $$
(52)

equation (51) becomes

$$ {\Phi}_v={A}_0{M}_{va}. $$
(53)

Therefore, the directional correction factor is

$$ \psi \equiv \frac{\Phi_v}{\Phi_{v0}}=\frac{A_0{M}_{va}}{A_0{M}_v}=\frac{M_{va}}{M_v}, $$
(54)

where

$$ {M}_v\equiv {\int}_0^{+\frac{\pi }{2}}{J}_v\left(\varepsilon \right)d\varepsilon, $$
(55)

and M_va is the apparent emittance defined in eq. (52). The electric current provided by the photodiode can now be stated as

$$ {I}_P={K}_v{E}_v={K}_v\frac{\Phi_v}{A_0}={K}_v\frac{\psi {\Phi}_{v0}}{A_0}={K}_v\psi {M}_v. $$
(56)
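
As a sketch of how eqs. (52), (54) and (55) can be evaluated numerically, the following fragment computes ψ for an elevation profile J_v(ε); the profile used here is illustrative, not a measured one:

```python
import numpy as np

def psi_general(J, S, n=10001):
    """psi = M_va / M_v for an elevation emittance profile J(eps), eqs. (52)-(55)."""
    eps = np.linspace(0.0, np.pi / 2, n)
    S_eps = S(np.pi / 2 - eps)             # S_eps = S(phi) with phi = pi/2 - eps
    M_va = np.trapz(J(eps) * S_eps, eps)   # apparent emittance, eq. (52)
    M_v = np.trapz(J(eps), eps)            # emittance, eq. (55)
    return M_va / M_v

# Illustrative profile concentrating the light near the zenith:
print(psi_general(lambda e: np.sin(e) ** 4, np.cos))   # closer to 1 than 2/pi
```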

2.5 The effect of sensor tilting

Let us now consider that the sensor tilts by an angle θ with respect to the vertical and study the effect this has on the illuminance measurements (Fig. 5).

Fig. 5
figure 5

Tilted sensor

Obviously, the directional correction factor will now depend on the tilting angle, so

$$ \psi \left(\theta \right)\equiv \frac{M_{va}\left(\theta \right)}{M_v\left(\theta \right)}. $$
(57)

To study this rotation in 3D space, a floor-fixed axis system (X, Y, Z) and a sensor-fixed axis system (X′, Y′, Z′) are defined (Fig. 6). The rotation of value θ occurs about the Y = Y′ axis. The direction of a certain light beam is denoted by the vector \( \overrightarrow{P} \), whose coordinates in the floor-fixed axes are \( \overrightarrow{P}=\left[{P}_x,{P}_y,{P}_z\right] \). This vector is projected onto the (X, Y) plane, producing the vector \( \overrightarrow{Q} \), whose coordinates in the floor-fixed axes are \( \overrightarrow{Q}=\left[{Q}_x,{Q}_y,0\right] \). The angle α between the vector \( \overrightarrow{Q} \) and the X axis is called the azimuth; in the same way, the angle ε between the vector \( \overrightarrow{P} \) and the (X, Y) plane is called the elevation. In the sensor-fixed coordinate system the azimuth and elevation angles of the vector \( \overrightarrow{P} \) are, respectively, α′ and ε′. Finally, the angle β between the vector \( \overrightarrow{P} \) and the Z′ axis (the sensor's normal) is the angle determining the sensor directional sensitivity.

Fig. 6
figure 6

Rotating the reference system

The luminous emittance must now be measured using the sensor-fixed axes, so that

$$ {M}_v\left(\theta \right)={\int}_0^{\frac{\pi }{2}}{J}_v\left(\varepsilon \right)d{\varepsilon}^{\prime }. $$
(58)

Considering that

$$ {J}_v\left(\varepsilon \right)\equiv \underset{-\pi }{\overset{+\pi }{\int }}{H}_v\left(\alpha, \varepsilon \right)d{\alpha}^{\prime }, $$
(59)

and substituting into (58), it follows that

$$ {M}_v\left(\theta \right)={\int}_0^{\frac{\pi }{2}}\left[{\int}_{-\pi}^{+\pi }{H}_v\left(\alpha, \varepsilon \right)d{\alpha}^{\prime }\right]d{\varepsilon}^{\prime }. $$
(60)

To obtain the apparent emittance M_va, as defined in eq. (52), only the influence of the sensor directional sensitivity needs to be added, so that

$$ {M}_{va}\left(\theta \right)={\int}_0^{\frac{\pi }{2}}\left[{\int}_{-\pi}^{+\pi }{H}_v\left(\alpha, \varepsilon \right)S\left(\beta \right)d{\alpha}^{\prime }\right]d{\varepsilon}^{\prime }. $$
(61)

In these equations the integrals are computed in the sensor-fixed reference system, but the emittance density function H_v(α, ε) is given in the floor-fixed axes. It is therefore necessary to express the values of α and ε as a function of α′ and ε′, i.e., to calculate the azimuth and elevation angles in the floor-fixed axes once their values in the sensor-fixed axes are known (see annexes).
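
A minimal sketch of this angle conversion is given below. The rotation sign convention is an assumption and may need to be flipped depending on how θ is measured in Fig. 6; the light direction is built as a unit vector in the sensor-fixed axes, rotated about the shared Y = Y′ axis, and converted back to azimuth and elevation:

```python
import numpy as np

def sensor_to_floor_angles(alpha_p, eps_p, theta):
    """Map sensor-fixed azimuth/elevation (alpha', eps') to floor-fixed (alpha, eps)
    for a sensor tilted by theta about the shared Y = Y' axis.
    The rotation sign convention is an assumption; flip theta if needed."""
    # Unit direction vector in sensor-fixed axes
    p = np.array([np.cos(eps_p) * np.cos(alpha_p),
                  np.cos(eps_p) * np.sin(alpha_p),
                  np.sin(eps_p)])
    # Rotation about Y by theta, back to floor-fixed axes
    Ry = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(theta), 0.0, np.cos(theta)]])
    q = Ry @ p
    eps = np.arcsin(np.clip(q[2], -1.0, 1.0))   # floor-fixed elevation
    alpha = np.arctan2(q[1], q[0])              # floor-fixed azimuth
    return alpha, eps
```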

2.6 Determining a map of light sources

In the previous subsections, the correction factor ψ has been derived for different setups of sensor exposure to light. Substituting this value into eq. (2), it is possible to obtain the illuminance at a certain spot by measuring the current I_p in low-cost photodiodes (non-compensated sensors), that is,

$$ {E}_v=\frac{I_p}{K_v\psi }. $$
(62)

Additionally, spectral compensation can also be obtained for this type of sensor, as described in [29]. So, it is possible to deploy a sensor network and obtain lighting measurements at different spots in a certain region through the spectral and directional correction of the electric current provided by low-cost sensors (photodiodes). This sensor network allows, firstly, the drawing of an illuminance map of the area covered by the network. Furthermore, as will be seen later, the existence of nearby sensors significantly reduces the illuminance errors obtained when measuring with a single low-cost sensor.

By using a network of light sensors it is possible not only to draw an illuminance map of a certain area, but also to determine the luminous flux emitted by every light source in the region. So, the light sources can be monitored and controlled.

A region lit by n light sources and monitored by a network made up of m sensors will be considered. Let E_vi be the illuminance measured by the i-th sensor and Φ_vj the luminous flux emitted by the j-th source. The relationship between luminous fluxes and illuminances is actually a relationship between emitted and received powers, taking into account the spectral correction due to the human eye. Therefore, there exists a linear relationship that can be written as follows

$$ {E}_{vi}=\sum_{j=1}^n{\lambda}_{ij}\ {\Phi}_{vj}, $$
(63)

where λ_ij is the factor of proportionality between the illuminance measured by the i-th sensor and the luminous flux of the j-th source. In matrix terms the expression takes the form E_v = Λ Φ_v, where E_v is the illuminance vector as measured by the m sensors, Φ_v is the luminous flux vector corresponding to the n light sources, and Λ is the proportionality matrix. The value of Λ can be determined using light sources of known luminous flux and measuring the illuminance at each sensor.

Once the Λ matrix has been obtained, for a given illuminance measurement vector it is possible to estimate the luminous flux emitted by every light source. When the number of sensors and sources is the same (m = n) the luminous flux can be obtained by solving the corresponding system of equations, i.e., through the expression Φ_v = Λ^{-1} E_v. If the number of sensors is greater than the number of light sources (m > n) the system of equations is overdetermined (more equations than unknowns). In this case it is possible to reduce the measurement errors by computing the luminous fluxes as those that minimize the mean square error, i.e., by performing a least squares fit according to the expression

$$ \min \parallel {\boldsymbol{E}}_{\boldsymbol{v}}-\boldsymbol{\Lambda}\ {\boldsymbol{\Phi}}_{\boldsymbol{v}}{\parallel}^2, $$
(64)

where ∥x∥ denotes the Euclidean norm of the vector x.
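
A minimal sketch of this estimation step, assuming NumPy's standard least squares solver (which also covers the exact m = n case), is given below with a synthetic proportionality matrix:

```python
import numpy as np

def estimate_fluxes(Lambda, E_v):
    """Solve min || E_v - Lambda @ Phi_v ||^2, eq. (64); exact when m == n."""
    Phi_v, *_ = np.linalg.lstsq(Lambda, E_v, rcond=None)
    return Phi_v

# Example with m = 5 sensors and n = 3 sources (synthetic Lambda):
rng = np.random.default_rng(1)
Lambda = rng.uniform(0.01, 0.1, size=(5, 3))
Phi_true = np.array([800.0, 400.0, 600.0])          # lm
E_v = Lambda @ Phi_true + rng.normal(0.0, 0.5, 5)   # noisy illuminance readings, lx
print(estimate_fluxes(Lambda, E_v))                 # ~ Phi_true
```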

3 Results

3.1 Results on directional correction

The method developed in the previous section has been tested in 3 specific applications using a low-cost photodiode [18] in a prototype with a cost of about one dollar. Reverse biased, the current supplied by the device is proportional to the received illuminance, according to eq. (1). The proportionality factor can be found in the photodiode datasheet; its value, measured using type A front light (corresponding to a lamp with a tungsten filament at 2856 K [8]), is

$$ {K}_v=135\frac{nA}{lx}. $$
(65)

The first application is to measure the illuminance at a single spot in a laboratory lit by numerous fluorescent lamps (type F light [8]). The measurements obtained with the photodiode for different values of the sensor tilting angle θ are compared with the results obtained using a mid-range commercial luxmeter [20] with a cost of about one hundred dollars. As the room is illuminated with non-type-A lights, which have a different spectral behavior, the illuminance values have to be spectrally corrected with a factor σ, which for the proposed photodiode takes the value σ = 0.112 [29]. On the other hand, as the sensor receives light from many directions and is not optically compensated, a directional correction factor ψ must also be included. Finally, the illuminance measurement is derived from the electric current produced by the photodiode using the equation

$$ {E}_v=\frac{I_p}{K_v\ \sigma\ \psi }. $$
(66)

First, 3D omnidirectional lighting will be considered. According to eqs. (41) and (38) the directional correction factor can be computed as

$$ \psi =\frac{2}{\pi }{\int}_0^{+\frac{\pi }{2}}{S}_{\varphi }d\varphi, $$
(67)

where S_φ represents the photodiode directional sensitivity, whose value, as provided by the manufacturer in the datasheet, is depicted in Fig. 7.

Fig. 7
figure 7

Photodiode directional sensitivity

Using eq. (67) to integrate this curve, the value ψ = 0.134 is obtained for the directional correction factor and, therefore, the factor to convert the photodiode electric current into illuminance becomes

$$ \frac{E_v}{I_p}=\frac{1}{K_v\ \sigma\ \psi }=493.6\frac{lx}{\mu A}. $$
(68)
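
The sketch below illustrates this computation by integrating a tabulated sensitivity curve; the sample values are placeholders rather than the actual datasheet data, so the resulting ψ and conversion factor differ from the 0.134 and 493.6 lx/µA obtained above:

```python
import numpy as np

# Placeholder samples of the directional sensitivity S(phi) (NOT the datasheet curve)
phi_deg = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)
S = np.array([1.00, 0.95, 0.80, 0.55, 0.30, 0.15, 0.07, 0.03, 0.01, 0.00])

psi = (2.0 / np.pi) * np.trapz(S, np.deg2rad(phi_deg))  # eq. (67)
K_v, sigma = 0.135, 0.112                               # uA/lx, eq. (65); spectral factor [29]
print(psi, 1.0 / (K_v * sigma * psi))                   # correction factor and lx/uA, cf. eq. (68)
```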

Measuring the electric current produced by the photodiode at different inclination angles (angle with the vertical), and converting these values to illuminance measurements according to (68), the results shown in Fig. 8 are obtained. In this figure the illuminance values obtained using the photodiode and the calibrated luxmeter (taken as the true value) are compared. The difference between these two values is the error in the illuminance as measured by the photodiode.

Fig. 8
figure 8

Illuminance vs. inclination angles assuming 3D omnidirectional lighting

For relatively small inclination angles (photodiode exposed to direct illumination), errors are greater than in the case of indirect light. In most of the measurements the error does not exceed 20%, although in a few cases it is significantly higher and can reach 100% (see Fig. 9). These results are similar to the 57% maximum error reported for an RGB photosensor [7], and much better than the 152% error stated for a device equipped with three-color sensors [11]. The proposed method thus matches or surpasses previously reported research using less complex (and cheaper) sensors.

Fig. 9
figure 9

Illuminance errors assuming 3D omnidirectional lighting

Although a 100% maximum illuminance error is not a good result for laboratory measurements, it could be acceptable for applications where the goal is to control the subjective perception of lighting. In fact, the relationship between illuminance level and brightness (subjective perception) is logarithmic [24], following Fechner's Law [25]. To reflect this notion of subjective brightness perception, the illuminance relative to the visibility threshold E_vt is defined as

$$ {E}_{vt}=\frac{E_v}{E_0}, $$
(69)

where E_0 denotes the minimum illuminance that the human eye is able to detect (illuminance threshold). An accepted value of this constant is E_0 = 10^{-7.5} lx [27]. The logarithmic effect of Fechner's law is achieved by expressing E_vt in decibels. Using this procedure, the brightness values are shown in Fig. 10 and their errors in Fig. 11. As can be seen, the error of the proposed method in brightness terms (difference between photodiode and luxmeter) is about 3% or below for all inclination angles.
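
For clarity, the following sketch converts an illuminance reading into this decibel brightness scale; the 10·log10 power convention is assumed here:

```python
import numpy as np

E_0 = 10.0 ** -7.5   # illuminance threshold, lx [27]

def brightness_dB(E_v):
    """Brightness as E_vt = E_v / E_0 (eq. 69) expressed in decibels."""
    return 10.0 * np.log10(E_v / E_0)

print(brightness_dB(600.0))   # ~102.8 dB for a 600 lx reading
```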

Fig. 10
figure 10

Brightness vs. inclination angles assuming 3D omnidirectional lighting

Fig. 11
figure 11

Brightness errors assuming 3D omnidirectional lighting

To obtain the above results it has been assumed that the sensor is exposed to homogeneous 3D omnidirectional light. Now this assumption will be dropped and the problem will be addressed for the general lighting conditions described in [29]. Using eqs. (57), (60) and (61) the directional correction factor for every inclination angle is computed. The illuminance errors obtained with these values are shown in Fig. 12. It can be seen that taking into account the spatial light distribution does not improve the measurement, with similar or slightly worse error values than the simplified homogeneous 3D omnidirectional light assumption.

Fig. 12
figure 12

Illuminance errors assuming general lighting conditions

As can be seen, the high sensor directionality makes the illuminance measurement obtained with the photodiode very sensitive to the inclination angle, producing large differences in illuminance for small angle deviations. For this reason, and also for spectral compensation, commercial luxmeters commonly use optical filters and lenses or, in the most advanced cases, an array of photodetectors (e.g. 256). To overcome the directionality drawback a simpler approach is proposed here: to use a slightly more complex sensor that, instead of a single photodiode, is made up of a few photodetectors with different inclinations. Specifically, a 5-photodiode arrangement has been tested where each photodiode is tilted 30° apart from its closest neighbors. Figure 13 shows the measurements and their errors obtained for this arrangement, considering the simplified assumption of 3D omnidirectional light.

Fig. 13
figure 13

Illuminance vs. inclination angles using a 5 photodiode sensor

It is noted that the maximum measurement error is reduced to about 30%, a very remarkable improvement since it means reducing errors to less than a third of their original value. In terms of brightness, the measurement error is less than 2%, which also represents a clear enhancement over the single-photodiode arrangement and also over the results in [7, 11].

3.2 Results on determining the map of illuminance

In the previous subsection the feasibility of using a low-cost sensor to measure the illuminance at a single spot has been explored. A second and probably more promising application of the proposed technique is to determine the illumination map of a certain region. For this purpose, a laboratory room of approximately 50 m², illuminated by numerous ceiling-mounted fluorescent lamps, has been used for testing. The working region covers a 7 m × 4 m area, avoiding the room edges. To measure the illuminance level at each room spot, an 8 × 5 grid of low-cost sensors has been arranged, with the sensors evenly spaced 1 m apart in each direction. The sensor grid was modeled by moving just one sensor from one spot to another and recording the measurements at every point.

In this application the same sensor described previously has been used, so its electric current measurements are converted to illuminance levels using the same spectral correction (σ = 0.112) and, again assuming 3D omnidirectional light, the same directional correction factor (ψ = 0.134). For the sake of comparison, a commercial luxmeter [20] is also used at the same spots. In the region between measurement points, the illuminance is estimated using two-dimensional linear interpolation. The results of this method are shown in Fig. 14, where the axes represent the 2D coordinates (x, y) in meters (m).
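
A minimal sketch of this interpolation step, using SciPy's regular-grid interpolator over illustrative (not measured) grid values, could be:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

x = np.arange(8.0)   # sensor x-coordinates, m (8 columns, 1 m apart)
y = np.arange(5.0)   # sensor y-coordinates, m (5 rows, 1 m apart)
# Illustrative (not measured) illuminance readings at the 8 x 5 grid, lx
E = 600.0 + 50.0 * np.random.default_rng(2).standard_normal((8, 5))

interp = RegularGridInterpolator((x, y), E, method="linear")
print(interp([[3.4, 2.7]]))   # illuminance estimate between grid points, lx
```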

Fig. 14
figure 14

Illuminance map

Looking at the map provided by the luxmeter, it can be said that, in general, the illuminance of the room is quite uniform with a value close to 600 lx, although slightly lower at the edges and, especially, at the corners. Considering now the map obtained with the photodiode grid, the result does not significantly differ from the previous one, although illuminance peaks close to 1000 lx appear. These peaks arise where the photodiodes are illuminated by front light, i.e., where they are located just below a light source. Figure 15 depicts the measurement errors at each spot, both in illuminance and in brightness terms. In these error maps, the fact that the maximum errors are locally concentrated is even more clearly visible, again corresponding to spots where the photodiodes are illuminated by front light.

Fig. 15
figure 15

Illuminance and brightness error maps

It is interesting to know not only where the maximum errors are located but also the statistical distribution of the errors. For this purpose, a 1000 × 1000 (10^6) spot grid is employed and the errors at every location are obtained. The histogram of these errors is displayed in Fig. 16 for both illuminance and brightness.

Fig. 16
figure 16

Illuminance and brightness error histograms

An alternative way to represent the distribution of illuminance and brightness errors is by means of their box plots (Fig. 17). The result shows 35% as the central value (median) of the illuminance error, with outlier values of up to 200%.

Fig. 17
figure 17

Illuminance and brightness error box plots

These results are consistent with the errors measured in subsection 3.1 when a non-tilted photodiode (θ = 0°) is considered. In order to reduce the measurement errors, the same procedure suggested above can be used, i.e., using at each measurement spot not a single photodiode but a small set of them tilted at different angles. According to the results obtained in the previous subsection, the errors should be reduced to a third of their original value using this procedure.

However, another approach is also possible without increasing the number of photodiodes (and the sensor cost). In most applications the illuminance does not change sharply from one location to the next, so it is possible to reduce the errors by averaging the measured values at each point with those of its closest neighbors, i.e., by performing a spatial low-pass filtering of the illuminance. Thus, a 3 × 3 matrix is arranged using the illuminance values of the point to be measured (central element of the matrix) and its 8 nearest neighbors, and the matrix mean value is computed. The result is depicted in Fig. 18. In this graph it becomes clear that the illuminance measurement error not only decreases remarkably but also that its worst values are located at the room edges and, particularly, in the corners. In those regions, spatial filtering is less efficient because a smaller number of neighbors is available.
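
A possible implementation of this filtering is sketched below; normalizing by a filtered image of ones makes each border point an average over only the neighbors that actually exist, reproducing the reduced smoothing at edges and corners:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lowpass_3x3(E):
    """3 x 3 spatial mean filter; border points are averaged over the
    neighbors that actually exist (fewer at edges and corners)."""
    num = uniform_filter(E, size=3, mode="constant", cval=0.0)  # zero-padded mean
    den = uniform_filter(np.ones_like(E), size=3, mode="constant", cval=0.0)
    return num / den   # mean over the available neighbors only
```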

Fig. 18
figure 18

Illuminance and its error using spatial low-pass filtering

The fact that the room's central region has significantly smaller error values is clearly shown in Fig. 19, where the area with errors below 20% is filled in blue. Only the edges, and mainly the corners, have values exceeding this error limit.

Fig. 19
figure 19

Illuminance error levels using spatial low-pass filtering

The statistical distribution of the illuminance error is represented in Fig. 20, both through its histogram and its box plot. A quite important decrease in the illuminance errors can now be observed, with values about one-third of the original error. This reduction is even more remarkable for the maximum error values.

Fig. 20
figure 20

Illuminance error statistical distribution using spatial low-pass filtering

In applications where the subjective perception of the illumination level is the relevant issue, the brightness measures defined in (69) can be used. The statistical distribution of the brightness measurement error is depicted in Fig. 21, where it becomes clear that, in terms of brightness, the measurement error does not exceed 2.5% in any case.

Fig. 21
figure 21

Brightness error statistical distribution using spatial low-pass filtering

3.3 Results on obtaining a map of light sources

In the previous subsection it has been shown that it is possible to draw an illuminance map of a certain area from the local values measured by a low-cost sensor network. Additionally, by properly calibrating the sensor network, the luminous fluxes emitted by the light sources can also be obtained.

To illustrate this third application, a prototype experiment has been designed where a surface was illuminated using 3 type A light sources: incandescent light bulbs powered at 220 V, with a nominal electric power of 60 W and a luminous flux of 800 lm. The luminous flux of each lamp is controlled by modifying the supply voltage. The surface's illuminance is measured using a 3 × 3 sensor grid with the sensors evenly spaced 1 m apart in each direction. The sensor grid can be built by placing 9 sensors at the 9 spots of the grid, but in the prototype it has been modeled by moving just one sensor from one spot to another and recording the measurements at every point.

The relationship between supply voltage and electric power is quadratic, but its conversion to luminous flux is not straightforward. The nominal 800 lm value is obtained when the 60 W nominal electric power is dissipated, i.e., when the supply voltage is 220 V. When other, non-nominal electric power values are considered, the filament temperature, the radiant flux and the light spectrum change non-linearly. For this reason, it is first necessary to experimentally obtain the relationship between the dissipated electric power and the luminous flux emitted by the lamp.

The luminous flux (Φ_v) emitted by a light source at a given supply voltage is computed by measuring the illuminance (E_v) at a certain point and comparing this value with the illuminance (E_vn) obtained at that same point when the lamp is powered at its nominal value and emits its nominal luminous flux (Φ_vn = 800 lm). This procedure is summarized in the following equation

$$ {\Phi}_v=\frac{\Phi_{vn}}{E_{vn}}\ {E}_v. $$
(70)

The experimental relationship between the supply voltage (V) and the luminous flux emitted by the lamp is obtained in the form

$$ {\Phi}_v={\Phi}_v(V). $$
(71)

The result is shown in Fig. 22, where the experimental relationship between the luminous flux and the electric supply is fitted to a quadratic function of the voltage (linear in electric power). A very good fit is obtained except for the lowest supply voltages.
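
A sketch of this fit, using illustrative (not measured) voltage and flux pairs, constrains the model to Φ_v = cV², i.e. linear in electric power:

```python
import numpy as np

# Illustrative (not measured) supply voltages and fluxes derived via eq. (70)
V = np.array([120.0, 140.0, 160.0, 180.0, 200.0, 220.0])      # volts
flux = np.array([180.0, 324.0, 423.0, 536.0, 661.0, 800.0])   # lumens

# Fit flux = c * V^2, quadratic in voltage (i.e. linear in electric power)
c = np.linalg.lstsq(V[:, None] ** 2, flux, rcond=None)[0][0]
print(c * 220.0 ** 2)   # fitted flux at nominal supply, ~800 lm
```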

Fig. 22
figure 22

Experimental luminous flux as a function of the electric supply

The next step is to experimentally determine the value of the λ_ij coefficients in eq. (63). For this purpose, every light source is turned off except the j-th lamp, which is connected to a known supply voltage (V_j). Using (71), the luminous flux (Φ_vj) emitted by that light source is computed. Then, the illuminance level (E_vi) at every i-th point is measured using the sensor network. By repeating this procedure for every sensor and every light source, the elements of the Λ matrix can be obtained as

$$ {\lambda}_{ij}=\frac{E_{vi}}{\Phi_{vj}}. $$
(72)
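
A minimal sketch of this calibration step (names are illustrative) builds Λ element-wise from the single-source runs:

```python
import numpy as np

def calibrate_Lambda(E_single, Phi_single):
    """Proportionality matrix from single-source runs, eq. (72):
    E_single[i, j] = illuminance at sensor i with only source j on (lx),
    Phi_single[j]  = flux emitted by source j during that run (lm)."""
    return E_single / Phi_single[None, :]
```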

Now, solving the least squares problem (64), it is possible to estimate the luminous fluxes (Φ_v) emitted by the sources from the illuminance measurements (E_v) obtained by the sensor network. Applying this procedure to various configurations of the light sources, it is possible to compute the error in the estimation of the luminous flux of each source.

A set of experiments has been performed using different light source configurations, and the error in the luminous flux estimation has been measured. Figure 23 depicts the statistical box plots of the luminous flux estimation errors both when using a commercial luxmeter and when using the low-cost photodiode. In this latter case, a maximum error of about 25% is obtained, with a central error value of 10%. This figure is quite satisfactory for many applications, especially considering that light sources are commonly in only two states: on or off.

Fig. 23
figure 23

Luminous flux estimation error

4 Discussion and conclusions

From the theoretical and experimental work described in the previous sections it follows that it is possible to use a simple (and inexpensive) sensor (a photodiode) as a light meter capable of providing fairly accurate illuminance and brightness measurements. Low-cost sensors, such as the one proposed, require a directional correction whose expressions have been derived throughout the paper.

The error analysis has shown that a directional correction assuming 3D omnidirectional lighting matches or even surpasses the results obtained in previous research, although much simpler sensors are used in the proposed approach. On the other hand, by increasing the number of photodiodes in every sensor (5 photodiodes per sensor are proposed) the illuminance measurement error can be further reduced, in exchange for a slight increase in sensor cost.

It has also been shown that low-cost sensor networks make it feasible to draw illuminance maps, and that the errors can be significantly reduced by spatial low-pass filtering algorithms.

Likewise, using a low-cost sensor network it is possible to estimate the luminous flux emitted by a set of light sources in a certain area while keeping the estimation error low enough for most practical applications.

The results of this paper have been obtained using a lab prototype. Some issues, such as power supply and connectivity concerns, should still be addressed for its use in real situations.