1 Introduction

There are many practical situations where distributed lighting monitoring systems are used to improve illuminance control, human health and comfort, industrial security, or power efficiency. In those cases a common approach is to deploy a sensor network that integrates not only light information but also other magnitudes (sound, images, pollution levels, temperature, etc.). Communication among all these sensors is usually achieved through wireless connections, creating what is called a Wireless Multimedia Sensor Network (WMSN).

The increasing use of WMSNs has been enabled by the availability, particularly in recent years, of sensors that are ever smaller, cheaper, and more intelligent. These sensors are usually equipped with wireless interfaces with which they can communicate with one another to build up a network. The design of a WMSN depends significantly on the application, and it must consider factors such as the environment, the application’s design objectives, cost, hardware, and system constraints [17, 32].

One quite widespread application of this technology is street light control [25]. Some of these systems are based only on energy use monitoring [13], although most of them also incorporate light sensors to detect human presence (light needs) and illuminance level (light achieved) [20]. In these applications, several technical problems have to be addressed, such as routing through the WMSN [19] or the dependence on, and compensation for, the sensor positions [40].

Another application where these technologies are extensively used is light control in smart buildings, usually also integrating many other kinds of information [12]. With regard to the topics covered in this paper, recent developments have tackled energy cost reduction within the Real Time Pricing (RTP) framework of smart grids [14], the adaptive behavior of the system through self-calibration and self-learning [39], and the concern over sensor cost when scaling up the size of the network [8].

A further very promising field, although not yet sufficiently exploited, is theaters and the filmmaking industry, where deploying WMSN technologies could be very useful to integrate information provided by different classes of sensors, combining the simultaneous use of video streams, images, audio streams, and scalar data. For this application, lighting control is extremely important [2, 3]. Computerized control systems for lights in films and theaters are a well-established technology, and several commercial products are available [4, 9]. Lighting control on stages has also attracted the attention of recent research work addressing different aspects of the problem [18, 33, 36, 38].

However, most of these current systems only provide actuation and do not take advantage of sensor data to improve the control. It is important to know and use live light information from light sensors deployed on the set. Real-time data accounts for how features such as light intensity and color temperature change over time and across deployments due to filament aging, supply voltage variation, changes in fixture position, color filters, etc. Without real-time measurements of light, it is time-consuming to maintain desired light intensities in certain areas across many venues and over long time periods. Light intensities can be measured accurately by commercially available handheld light meters [21, 35]. However, these devices have not been incorporated into systems supporting automatic light control and must be manually moved through different points in space. Cameras can provide only reflected light intensity, so we focus on incident light in order to have measurements that are independent of surfaces and materials.

The process of checking that adequate lighting conditions are met is a well-established task, and reliable measuring instruments (light meters) exist at moderate prices. However, continuous monitoring of the adequacy of the light level requires light sensors distributed throughout the stage and their subsequent integration into a control system, as has been proposed in [29] for entertainment and media production applications. Obviously, better light monitoring and control is achieved when a higher spatial resolution is used, that is, when more light spots are measured.

Light sensing is an evolving field with new devices continuously becoming available. These advances address different features such as noise immunity [1], mechanical flexibility [22], or suitability for integration in WSNs [5]. But when the number of spots to be monitored increases, the cost of the sensors becomes one of the key factors.

A good light sensor should exhibit two important properties: a homogeneous directional response, whatever the direction from which the light is received (flat directional sensitivity), and a spectral response that fits that of the human eye. Most light sensors do not comply with these requirements and often need directional correction devices (usually superimposed lenses) and spectral compensation (optical filters and/or the combined use of different types of sensors). These correction mechanisms increase the price of each sensor.

In this work a different approach is proposed: it is shown that light measurements can be corrected both spectrally and directionally by numerical methods, without requiring any additional hardware at each sensor. This allows low-cost distributed systems with numerous light measurement spots in a theater installation or film set. This paper describes the process and provides the necessary correction values for different lighting conditions. Similar solutions have been reported for the calibration of wearable light exposure devices [10, 24], where only one light spot is needed and more sophisticated light sensors are used.

In Section 2, a particular light sensor is selected and its performance in measuring light intensity is derived. Section 3 addresses the problem of the sensor’s spectral sensitivity, obtaining the analytical background for spectral correction. In Section 4 the photodiode’s behavior under different light directions is tackled, presenting the method for directionally correcting the sensor’s measurements. The application of the proposed system is undertaken in Section 5, where the main results are also described. Finally, Section 6 summarizes the paper’s main contributions and conclusions.

2 The sensor

For an easier understanding of the proposed method, the description is made not only for an abstract sensor but also by applying the results to a specific device. For measuring light intensity we will use an OSRAM SFH 213 photodiode as the sensor. We seek the relationship between light intensity and electric current under lighting conditions with different spectral compositions and various directional distributions. In a photodiode, the equation governing the behavior of the device [27] includes a term which is a function of the light intensity:

$$ {i}_d={I}_s\left[{e}^{\frac{v_d}{n{V}_T}}-1\right]-{I}_P $$
(1)

where i_d is the current in the diode under a voltage v_d. The parameter I_s is the reverse-bias saturation current. The parameter V_T is the thermal voltage (a constant at a given temperature). The parameter n is the emission coefficient (in the range 1 ≤ n ≤ 2). In (1), I_P = K_v E_v, where E_v denotes the illuminance, i.e., the visible light power received per unit area by the diode, and K_v is a constant for each photodiode.

The electric current in the photodiode when it is reverse biased (v_d < 0) is a good light intensity meter because the term

$$ {I}_s\left[{e}^{\frac{v_d}{n{V}_T}}-1\right] $$
(2)

is negligible compared to I_P. In these cases

$$ \left|{i}_d\right|\approx {I}_P={K}_v{E}_v $$
(3)

In the case of the photodiode used here, the proportionality constant can be determined from the values (I_R = 1 nA; I_P = 135 μA @ E_v = 1000 lx) given in its data sheet [28]. We start by determining the value of I_s.

$$ {I}_R\equiv {\left.{i}_d\right|}_{v_d=-\infty; {E}_v=0}={I}_s\left[{e}^{\frac{-\infty }{n{V}_T}}-1\right]-{K}_v\cdotp 0=-{I}_s\to {I}_s=-{I}_R=1\ nA $$
(4)

Additionally

$$ {K}_v=\frac{I_P+{I}_s}{-{E}_v}=\frac{-135\ \mu A+1\ nA}{-1000\ lx}=135\frac{nA}{lx} $$
(5)

These data sheet values are given for a standard type A light (corresponding to an incandescent lamp) falling frontally on the photodiode. In the following sections the relationship between light intensity and electric current will be established for lighting with different spectral compositions and different directional distributions.
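As an illustrative sketch of (3)–(5), the following fragment reproduces the data sheet constants and converts a measured photocurrent into illuminance; the example current value is hypothetical and the sign conventions are reduced to magnitudes.

```python
# Sketch of the current-to-illuminance relationship of eqs. (3)-(5),
# using the OSRAM SFH 213 data sheet values quoted in the text.

I_R = 1e-9          # dark reverse current, 1 nA (data sheet)
I_P_REF = 135e-6    # photocurrent at the reference illuminance, 135 uA (data sheet)
E_V_REF = 1000.0    # reference illuminance, 1000 lx (data sheet)

# Eq. (4): the saturation current has the magnitude of the dark reverse current.
I_s = I_R

# Eq. (5): proportionality constant between photocurrent and illuminance.
K_v = (I_P_REF - I_s) / E_V_REF   # ~135 nA/lx

def illuminance_from_current(i_d_abs):
    """Eq. (3): illuminance (lx) from the magnitude of the reverse-biased
    photodiode current, valid for type A light and frontal incidence."""
    return i_d_abs / K_v

# Hypothetical example: a measured photocurrent of 27 uA corresponds to ~200 lx.
print(illuminance_from_current(27e-6))
```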

3 Spectral sensitivity

The radiant flux Φ_e is defined as the total power emitted by a light source or received by a light sensor. The luminous flux Φ_v is defined as the visible power emitted by the source or received by the sensor. Illuminance E_v (lux) is defined as the visible power received per unit area. For a sensor with area A,

$$ {E}_v=\frac{\Phi_{\mathrm{v}}}{A} $$
(6)

The spectral response of the human eye is not equal for all wavelengths (colors) of light. Although it can vary slightly from one individual to another and depends on lighting conditions, the photopic response V_λ (in daylight conditions) is standardized in [16]. If the spectral density of the radiant flux emitted by a certain light source is denoted Φ_eλ, the spectral density Φ_vλ of the luminous flux (as perceived by the human eye) will be

$$ {\Phi}_{\mathrm{v}\uplambda}={\Phi}_{\mathrm{e}\uplambda}{\mathrm{V}}_{\uplambda} $$
(7)

If a sensor with spectral sensitivity S λ is used to measure visible light, the spectral density of luminous flux measured by the sensor will be

$$ {\Phi}_{\mathrm{s}\uplambda}={\Phi}_{\mathrm{e}\uplambda}{S}_{\lambda } $$
(8)

The spectral sensitivity function of the sensor, S_λ, can be determined from its data sheet. A standard light source is one that emits light with a certain (standardized) spectral distribution. For example, a standard type A light [15] corresponds to a lamp with a tungsten filament at a temperature of 2856 K and has a standardized spectral distribution Φ_eAλn. Any type A light source will have a distribution Φ_eAλ with the same shape as the standard one, but multiplied by a constant L_A, that is,

$$ {\Phi}_{\mathrm{eA}\uplambda}={L}_A{\Phi}_{\mathrm{eA}\uplambda \mathrm{n}} $$
(9)

Other standard light sources of interest are type D lights [15], corresponding to natural daylight, and the family of type F lights [6], corresponding to different types of fluorescent lights (F1 to F12). Suppose we have a type A light source with a spectral distribution of radiant flux given by (9). The radiant flux emitted by this source will be

$$ {\Phi}_{\mathrm{eA}}={\displaystyle {\int}_0^{\infty }{\Phi}_{\mathrm{eA}\uplambda}d\lambda ={L}_A}{\displaystyle {\int}_0^{\infty }{\Phi}_{\mathrm{eA}\uplambda \mathrm{n}}d\lambda ={L}_A{\Phi}_{\mathrm{eA}\mathrm{n}}} $$
(10)

an expression in which

$$ {\Phi}_{\mathrm{eA}\mathrm{n}}\equiv {\displaystyle {\int}_0^{\infty }{\Phi}_{\mathrm{eA}\uplambda \mathrm{n}}d\lambda =4.4396\cdotp {10}^5\ watts} $$
(11)

a value obtained by integrating the curve of the spectral distribution of radiant flux of type A light. Analogously, considering the luminous flux (perceived by the human eye), Φ_vA = L_A Φ_vAn, where

$$ {\Phi}_{\mathrm{vA}\mathrm{n}}\equiv {\displaystyle {\int}_0^{\infty }{\Phi}_{\mathrm{vA}\uplambda \mathrm{n}}d\lambda =1.0788\cdotp {10}^4 lm} $$
(12)

Considering the flux perceived by the sensor, Φ_sA = L_A Φ_sAn, where

$$ {\Phi}_{\mathrm{sA}\mathrm{n}}\equiv {\displaystyle {\int}_0^{\infty }{\Phi}_{\mathrm{sA}\uplambda \mathrm{n}}d\lambda =9.7898\cdotp {10}^4\ watts.} $$
(13)

The values of Φ_vAn and Φ_sAn are obtained by integrating the corresponding curves of the spectral distribution of type A light as perceived by the eye and by the sensor, respectively. The normalized spectral sensitivity, in this case for type A light, is defined as

$$ {\upeta}_{\mathrm{A}}\equiv \frac{\Phi_{\mathrm{sA}}}{\Phi_{\mathrm{vA}}} $$
(14)

From the above expressions it follows that

$$ {\upeta}_{\mathrm{A}}=\frac{\Phi_{\mathrm{sAn}}}{\Phi_{\mathrm{vAn}}} $$
(15)

In the case of our sensor, the normalized spectral sensitivity to type A light is

$$ {\upeta}_{\mathrm{A}}=\frac{\Phi_{\mathrm{sAn}}}{\Phi_{\mathrm{vAn}}}=9.07 $$
(16)

From the above expressions it can be deduced that

$$ {\Phi}_{\mathrm{vA}}=\frac{\Phi_{\mathrm{sA}}}{\upeta_{\mathrm{A}}}. $$
(17)

From (3), the current supplied by the photodiode in the case of excitation with type A light will be

$$ {I}_{PA}={K}_v{E}_{vA}=\frac{K_v}{\upeta_{\mathrm{A}}}\cdotp \frac{\Phi_{\mathrm{sA}}}{A} $$
(18)

Defining \( {K}_s\equiv \frac{K_v}{\upeta_{\mathrm{A}}} \) and substituting in the above expression, we finally obtain

$$ {I}_{PA}={K}_s\frac{\Phi_{\mathrm{sA}}}{A} $$
(19)

In the case of our sensor, the value of the constant K_s is

$$ {K}_s\equiv \frac{K_v}{\upeta_{\mathrm{A}}}=14.88\frac{nA}{\frac{watt}{m^2}} $$
(20)

Suppose now that we have a generic light source with a spectral distribution of radiant flux given by Φ_eλ. The luminous flux (perceived by the human eye) will be

$$ {\Phi}_{\mathrm{v}}={\displaystyle {\int}_0^{\infty }{\Phi}_{\mathrm{e}\uplambda}{V}_{\lambda }d\lambda } $$
(21)

Meanwhile the flux perceived by the sensor will be

$$ {\Phi}_{\mathrm{s}}={\displaystyle {\int}_0^{\infty }{\Phi}_{\mathrm{e}\uplambda}{S}_{\lambda }d\lambda } $$
(22)

The normalized spectral sensitivity for an arbitrary light is defined as

$$ \upeta \equiv \frac{\Phi_{\mathrm{s}}}{\Phi_{\mathrm{v}}} $$
(23)

The current supplied by the photodiode in the case of a generic excitation light will be

$$ {I}_P={K}_s\frac{\Phi_{\mathrm{s}}}{A}={K}_s\frac{{\upeta \Phi}_{\mathrm{v}}}{A}. $$
(24)

The ratio between the current supplied by the photodiode when excited with a generic light and when excited with type A light, assuming both deliver the same luminous flux, is called the spectral correction factor σ. This relationship can be expressed as

$$ \upsigma \equiv \frac{I_P}{I_{PA}}=\frac{K_s\frac{{\upeta \Phi}_{\mathrm{v}}}{A}}{K_s\frac{\upeta_{\mathrm{A}}{\Phi}_{\mathrm{v}}}{A}}=\frac{\upeta}{\upeta_{\mathrm{A}}} $$
(25)

This value, properly calculated, can be used to determine the current provided by the photodiode

$$ {I}_P=\upsigma\ {I}_{PA}=\upsigma\ {K}_v{E}_v $$
(26)
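The following sketch illustrates how (23), (25), and (26) can be evaluated numerically. The tabulated spectra are placeholders; in an actual calculation they would hold the standardized illuminant spectra, the photopic curve V_λ, and the data-sheet sensitivity S_λ.

```python
# Sketch of the spectral correction of eqs. (21)-(26): the normalized spectral
# sensitivities and the correction factor sigma are obtained by numerically
# integrating tabulated spectra. The arrays below are placeholders.
import numpy as np

wavelengths = np.arange(380.0, 781.0, 5.0)   # nm, visible range
d_lambda = wavelengths[1] - wavelengths[0]
phi_eA = np.ones_like(wavelengths)           # type A spectrum Phi_eA_lambda (placeholder)
phi_e  = np.ones_like(wavelengths)           # spectrum of the light to measure (placeholder)
V      = np.ones_like(wavelengths)           # photopic response V_lambda (placeholder)
S      = np.ones_like(wavelengths)           # photodiode spectral sensitivity S_lambda (placeholder)

def normalized_spectral_sensitivity(phi_e_lambda):
    """Eq. (23): eta = integral(Phi_e*S) / integral(Phi_e*V), via a Riemann sum."""
    phi_s = np.sum(phi_e_lambda * S) * d_lambda
    phi_v = np.sum(phi_e_lambda * V) * d_lambda
    return phi_s / phi_v

eta_A = normalized_spectral_sensitivity(phi_eA)  # with the real spectra, 9.07 for the SFH 213
eta   = normalized_spectral_sensitivity(phi_e)
sigma = eta / eta_A                              # eq. (25)

# Eq. (26) rearranged: illuminance from the photocurrent under the generic light.
K_v = 135e-9                                     # A/lx, from Section 2
def illuminance(i_p):
    return i_p / (sigma * K_v)
```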

Applying these criteria to the sensor illuminated with a type D (daylight) light, we find that its normalized spectral sensitivity is

$$ {\upeta}_{\mathrm{D}}\equiv \frac{\Phi_{\mathrm{sD}}}{\Phi_{\mathrm{vD}}}=\frac{{\displaystyle {\int}_0^{\infty }}{\Phi}_{\mathrm{eD}\uplambda}{S}_{\lambda }d\lambda }{{\displaystyle {\int}_0^{\infty }}{\Phi}_{\mathrm{eD}\uplambda}{V}_{\lambda }d\lambda }=2.72 $$
(27)

This value is obtained by integrating the corresponding curves of the spectral distribution of type D light as perceived by the eye and by the sensor. The spectral correction factor will therefore be

$$ {\upsigma}_{\mathrm{D}}=\frac{\upeta_{\mathrm{D}}}{\upeta_{\mathrm{A}}}=0.30 $$
(28)

Similarly, considering the case of a light sensor illuminated by a type F1 light (fluorescent light), we find that its normalized spectral sensitivity is

$$ {\upeta}_{\mathrm{F}}\equiv \frac{\Phi_{\mathrm{sF}}}{\Phi_{\mathrm{vF}}}=\frac{{\displaystyle {\int}_0^{\infty }}{\Phi}_{\mathrm{eF}\uplambda}{S}_{\lambda }d\lambda }{{\displaystyle {\int}_0^{\infty }}{\Phi}_{\mathrm{eF}\uplambda}{V}_{\lambda }d\lambda }=1.016 $$
(29)

This value is obtained by integrating the corresponding curves of the spectral distribution of type F1 light as perceived by the eye and by the sensor. The spectral correction factor will therefore be

$$ {\upsigma}_{\mathrm{F}}=\frac{\upeta_{\mathrm{F}}}{\upeta_{\mathrm{A}}}=0.112 $$
(30)

For the other lights of the type F family, the results for the normalized spectral sensitivity and the spectral correction factor are shown in the following table.

 

       F1     F2     F3     F4     F5     F6     F7     F8     F9     F10    F11    F12
η_F    1.016  0.966  0.956  0.972  0.985  0.927  1.255  1.412  1.363  0.988  1.002  1.030
σ_F    0.112  0.107  0.106  0.107  0.109  0.102  0.138  0.156  0.150  0.109  0.111  0.114
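For illustration, the table can be used as a simple lookup when the type of fluorescent luminaire is known; the snippet below applies (26) with an assumed F2 source and a hypothetical current value.

```python
# Spectral correction factors sigma_F for the F1-F12 illuminants (table above),
# used to convert a photodiode current into illuminance via eq. (26).
SIGMA_F = {
    "F1": 0.112, "F2": 0.107, "F3": 0.106, "F4": 0.107, "F5": 0.109, "F6": 0.102,
    "F7": 0.138, "F8": 0.156, "F9": 0.150, "F10": 0.109, "F11": 0.111, "F12": 0.114,
}
K_V = 135e-9  # A/lx, from Section 2

def illuminance_under_fluorescent(i_p, source="F2"):
    """Eq. (26) rearranged: E_v = I_P / (sigma * K_v)."""
    return i_p / (SIGMA_F[source] * K_V)

# Hypothetical example: 5 uA of photocurrent under an F2 luminaire -> ~346 lx.
print(round(illuminance_under_fluorescent(5e-6, "F2")))
```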

4 Directional sensitivity

The sensor data sheet, from which the relationship between light intensity and electric current has been obtained, makes two assumptions: that the light is of type A and that it falls on the photodiode frontally. The first of these issues has been addressed in the previous section. In this section the consequences of light falling on the sensor non-frontally will be discussed.

Suppose that a sensor is illuminated by the radiation of a certain light source. M_v denotes the luminous emittance (lumen/m²), that is, the visible light power per unit area emitted by the source. Φ_v denotes the luminous flux (visible) received by the sensor. For a sensor with an area A_0 subjected to uniform perpendicular illumination, the relationship between the two magnitudes is

$$ {\Phi_{\mathrm{v}}}_0={M}_v{A}_0 $$
(31)

For the case in which the sensor is subjected to uniform illumination inclined at an angle φ relative to the perpendicular, the relationship becomes [23]

$$ {\Phi_{\mathrm{v}}}_{\upvarphi}={M}_v{A}_0\ cos\varphi ={\Phi_{\mathrm{v}}}_0\ cos\varphi $$
(32)

The directional sensitivity of the sensor is defined as

$$ {\mathrm{S}}_{\upvarphi}=\mathrm{S}\left(\upvarphi \right)\equiv \frac{{\Phi_{\mathrm{v}}}_{\upvarphi}}{{\Phi_{\mathrm{v}}}_0} $$
(33)

In this case

$$ {\mathrm{S}}_{\upvarphi}= \cos \upvarphi $$
(34)

known as Lambert’s cosine law [31]. With the above definitions we can write

$$ {\Phi_{\mathrm{v}}}_{\upvarphi}={S}_{\varphi }{\Phi_{\mathrm{v}}}_0 $$
(35)

It can be shown in a similar way (the complete derivation is omitted for space reasons) that the expressions allowing one to calculate the correction factor under any lighting conditions (Fig. 1), with a sensor (blue segment in the figure) inclined at an angle θ, are the following:

$$ \psi \left(\theta \right)=\frac{M_{va}\left(\theta \right)}{M_v\left(\theta \right)} $$
(36)
Fig. 1 Room’s and sensor’s reference systems

where

$$ {M}_v\left(\theta \right)={\displaystyle {\int}_0^{\frac{\pi }{2}}\left[{\displaystyle {\int}_{-\pi}^{+\pi }{H}_v\left(\alpha, \varepsilon \right)d\alpha^{\prime }}\right]d\varepsilon^{\prime }} $$
(37)
$$ {M}_{va}\left(\theta \right)={\displaystyle {\int}_0^{\frac{\pi }{2}}\left[{\displaystyle {\int}_{-\pi}^{+\pi }{H}_v\left(\alpha, \varepsilon \right)S\left(\beta \right)d\alpha^{\prime }}\right]d\varepsilon^{\prime }} $$
(38)

and H_v(α, ε) is the density of emittance for a given azimuth α and elevation ε in the room’s reference system (X, Y, Z), defined as

$$ {H}_v\left(\alpha, \varepsilon \right)\equiv \frac{d{E}_v}{d\alpha\ d\varepsilon } $$
(39)

The variable β is the angle between the sensor plane and the light.

In these expressions a change of variables is needed to express all the elements as a function of the azimuth α′ and elevation ε′ in the sensor’s reference system (X′, Y′, Z′). After some basic trigonometric operations, the change of variables is

$$ \alpha = \arctan\frac{\cos {\varepsilon}^{\prime}\,\sin {\alpha}^{\prime}}{\cos {\varepsilon}^{\prime}\,\cos {\alpha}^{\prime}\,\cos \theta + \sin {\varepsilon}^{\prime}\,\sin \theta} $$
(40)
$$ \varepsilon = \arctan\frac{-\cos {\varepsilon}^{\prime}\,\cos {\alpha}^{\prime}\,\sin \theta + \sin {\varepsilon}^{\prime}\,\cos \theta}{\sqrt{{\left(\cos {\varepsilon}^{\prime}\,\cos {\alpha}^{\prime}\,\cos \theta + \sin {\varepsilon}^{\prime}\,\sin \theta \right)}^2+{\left(\cos {\varepsilon}^{\prime}\,\sin {\alpha}^{\prime}\right)}^2}} $$
(41)
$$ \beta =\frac{\pi }{2}-\varepsilon^{\prime } $$
(42)
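A short sketch of the change of variables (40)–(42), convenient when the integrals (37) and (38) are evaluated numerically; the function name is illustrative and the two-argument arctangent is used to handle quadrants.

```python
# Change of variables of eqs. (40)-(42): converts azimuth/elevation expressed
# in the tilted sensor frame (alpha', eps') into the room frame (alpha, eps),
# for a sensor inclined at an angle theta, and returns the incidence angle beta.
import numpy as np

def sensor_to_room_angles(alpha_p, eps_p, theta):
    x = np.cos(eps_p) * np.cos(alpha_p) * np.cos(theta) + np.sin(eps_p) * np.sin(theta)
    y = np.cos(eps_p) * np.sin(alpha_p)
    z = -np.cos(eps_p) * np.cos(alpha_p) * np.sin(theta) + np.sin(eps_p) * np.cos(theta)

    alpha = np.arctan2(y, x)                   # eq. (40), quadrant-aware arctangent
    eps = np.arctan2(z, np.hypot(x, y))        # eq. (41)
    beta = np.pi / 2 - eps_p                   # eq. (42)
    return alpha, eps, beta

# Example: a direction 30 degrees above the sensor plane, sensor tilted 20 degrees.
print(sensor_to_room_angles(0.0, np.radians(30), np.radians(20)))
```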

5 Application and results

The results of the previous sections have been applied to a specific situation under test conditions. The measurements were performed in a hall with all the (fluorescent) lights on and the doors and windows closed. The photodiode under study was used as the light sensor. It was placed inside a long, narrow opaque cylinder that gives an angular resolution of approximately ϕ = 3° (±1.5°). With this reduced opening angle, the sensitivity of the photodiode can be considered constant. The device was placed just below a fluorescent lamp, with two additional lamps seen by the sensor at inclinations of 30° to the left and 40° to the right. The sensor configured in this way was oriented over a set of azimuth and elevation angles covering the entire space. The value provided by the sensor at each position was an electric current. The measurements obtained for an azimuth α = 0 and a variable inclination θ are shown in Fig. 2. The three peaks in the graph correspond to the three closest lamps on the ceiling above the sensor.

Fig. 2 Measurements of the photodiode current for various inclinations (α = 0)

Similarly, Fig. 3 shows the measurements of electric current obtained by the sensor depending on the elevation ε for different values of azimuth (α).

Fig. 3 Measurements of the photodiode current for various elevations (variable azimuth)

Figure 4 shows a 3D representation of the measurements obtained by the sensor for the entire hall space. If this information about the spatial light distribution is not available, it can be estimated from the measurements of a grid of sensors [7, 26, 34].

Fig. 4 Measurements obtained by the sensor (3D representation)

For a given elevation ε and azimuth α, the relationship between the measured electric current and the lighting intensity can be derived from (26) and (39):

$$ {H}_v\left(\alpha, \varepsilon \right)=\frac{I_P\left(\alpha, \varepsilon \right)}{\sigma\ {K}_v\ {\phi}^2} $$
(43)

an expression in which ϕ represents the small opening angle of the sensor, over which H_v can be considered constant. In order to experimentally determine the values of the spectral correction factor (σ) and the directional correction factor (ψ), a reliable measurement of the illumination values for each angle is needed. For this purpose a commercial PCE-172 luxmeter [30] is used. This device incorporates cosine correction (uniform directional sensitivity) and has a spectral response adjusted to the normalized photopic response. With this luxmeter, the lighting for different angles is determined. Substituting (43) into (37), it can be written that

$$ {M}_v\left(\theta \right)=\frac{1}{\sigma\ {K}_v\ {\phi}^2}{\displaystyle {\int}_0^{\frac{\pi }{2}}\left[{\displaystyle {\int}_{-\pi}^{+\pi }{I}_P\left(\alpha, \varepsilon \right)d\alpha^{\prime }}\right]d\varepsilon^{\prime }} $$
(44)

an expression in which the values of I_P(α, ε) (experimentally determined) are known, as well as the value of K_v (from the data sheet of the sensor; see previous sections) and the value of ϕ (opening angle), geometrically determined from the dimensions of the cylinder in which the photodiode is placed. The only unknown is the spectral correction coefficient σ. The emittance without spectral correction is denoted \( {\tilde{M}}_v \):

$$ {\tilde{M}}_v\left(\theta \right)=\frac{1}{\ {K}_v\ {\phi}^2}{\displaystyle {\int}_0^{\frac{\pi }{2}}\left[{\displaystyle {\int}_{-\pi}^{+\pi }{I}_P\left(\alpha, \varepsilon \right)d\alpha^{\prime }}\right]d\varepsilon^{\prime }} $$
(45)
$$ {M}_v\left(\theta \right)=\frac{1}{\sigma\ }{\tilde{M}}_v\left(\theta \right) $$
(46)
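As an implementation sketch, (45) and (46) can be evaluated by numerically integrating the measured current grid over the hemisphere; the grid below is a placeholder for the actual measurements.

```python
# Sketch of eqs. (45)-(46): numerical integration of the measured photodiode
# currents I_P(alpha', eps') over the hemisphere to obtain the uncorrected
# emittance M_tilde_v for one sensor inclination theta.
import numpy as np

K_V = 135e-9                                  # A/lx, from Section 2
PHI = np.radians(3.0)                         # opening angle of the collimating cylinder

eps_p = np.linspace(0.0, np.pi / 2, 31)       # elevations in the sensor frame
alpha_p = np.linspace(-np.pi, np.pi, 121)     # azimuths in the sensor frame
d_eps, d_alpha = eps_p[1] - eps_p[0], alpha_p[1] - alpha_p[0]
I_P = np.full((eps_p.size, alpha_p.size), 1e-6)   # replace with measured currents (A)

def emittance_uncorrected(I_P):
    """Eq. (45): M_tilde_v = (1/(K_v*phi^2)) * double integral of I_P (Riemann sum)."""
    return np.sum(I_P) * d_alpha * d_eps / (K_V * PHI ** 2)

def emittance(I_P, sigma):
    """Eq. (46): spectrally corrected emittance."""
    return emittance_uncorrected(I_P) / sigma
```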

In Fig. 5, both the experimental results obtained using the luxmeter and the theoretical values of M_v(θ) derived from (46) for different values of σ are presented.

Fig. 5 Illuminance depending on the inclination: experimental and theoretical values

As shown in the graph, a spectral correction factor value of σ = 0.17 achieves a good fit between experimental and theoretical values. However, this adjustment can be optimized by numerical methods. Indeed, let E_v(θ_i) denote the experimental illuminance measured by the light meter at an inclination θ_i, and let M_v(θ_i) denote the theoretically calculated emittance for this same angle θ_i, with N inclination angles being considered. The mean square error obtained for a given value of the spectral correction factor is

$$ {\varepsilon}_{cm}=\frac{1}{N}{\displaystyle \sum_{i=1}^N}{\left[{M}_v\left({\theta}_i\right)-{E}_v\left({\theta}_i\right)\right]}^2 $$
(47)
$$ {\varepsilon}_{cm}=\frac{1}{N}{\displaystyle \sum_{i=1}^N}{\left[\frac{1}{\sigma\ }{\tilde{M}}_v\left({\theta}_i\right)-{E}_v\left({\theta}_i\right)\right]}^2 $$
(48)

Minimizing the mean square error (MSE), the optimum spectral correction factor is obtained, which in our case is σ = 0.17813. Figure 6 shows the fit between the experimental and theoretical values with the optimal σ. If we assume a 5 % error both in inclination and in the illuminance measurements, every experimental point stays inside the error bands depicted in the figure.
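Since M_v(θ_i) depends linearly on 1/σ, the minimization of (48) admits a closed-form least-squares solution. A sketch follows, with illustrative placeholder data rather than the measured data set.

```python
# Sketch of the minimization of eq. (48). With x = 1/sigma the problem is
# ordinary linear least squares, so x* = sum(M_tilde*E)/sum(M_tilde**2) and
# sigma* = 1/x*. The three data points below are illustrative placeholders.
import numpy as np

M_tilde = np.array([93.0, 108.0, 86.0])    # uncorrected emittances M_tilde_v(theta_i), eq. (45)
E_meas  = np.array([520.0, 610.0, 480.0])  # luxmeter illuminances E_v(theta_i), lx

def optimal_sigma(M_tilde, E_meas):
    """Least-squares minimizer of eq. (48) over the spectral correction factor."""
    x_opt = np.dot(M_tilde, E_meas) / np.dot(M_tilde, M_tilde)
    return 1.0 / x_opt

print(optimal_sigma(M_tilde, E_meas))      # ~0.178 with these placeholder values
```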

Fig. 6 Illuminance depending on the inclination: optimal values

The obtained spectral correction value is close to the range corresponding to type F luminaires (0.102 ≤ σ ≤ 0.156). The difference may be explained, in addition to experimental error, by the incidence on the sensor of light reflected by the room (not coming directly from the luminaires), which obviously alters the spectrum of the light and therefore the spectral correction factor.

To determine the value of the directional correction factor ψ, expression (36) is applied, where the value of M_v(θ) is given by (44), and the value of M_va(θ) is derived by substituting (43) into (38),

$$ {M}_{va}\left(\theta \right)=\frac{1}{\sigma\ {K}_v\ {\phi}^2}{\displaystyle {\int}_0^{\frac{\pi }{2}}\left[{\displaystyle {\int}_{-\pi}^{+\pi }{I}_P\left(\alpha, \varepsilon \right)S\left(\beta \right)d\alpha^{\prime }}\right]d\varepsilon^{\prime }} $$
(49)

an expression in which every value is known. Figure 7 depicts the experimental and theoretical values of the illuminance (emittance) for both the photodiode (M_va) and the luxmeter (M_v). In the photodiode case, a 10 % error band both in inclination and in the illuminance measurements is depicted.
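An implementation sketch of (44), (49), and (36) follows, reusing the Lambertian sensitivity S(β) = cos β of (34); the current grid and the constants are placeholders (σ is the value fitted above).

```python
# Sketch of eqs. (44), (49) and (36): M_va weights the measured currents by the
# directional sensitivity S(beta) = cos(beta), and the directional correction
# factor psi is the ratio M_va / M_v.
import numpy as np

K_V = 135e-9                                      # A/lx, from Section 2
PHI = np.radians(3.0)                             # opening angle of the collimating cylinder
SIGMA = 0.17813                                   # spectral correction factor fitted above

eps_p = np.linspace(0.0, np.pi / 2, 31)           # elevations in the sensor frame
alpha_p = np.linspace(-np.pi, np.pi, 121)         # azimuths in the sensor frame
d_eps, d_alpha = eps_p[1] - eps_p[0], alpha_p[1] - alpha_p[0]
I_P = np.full((eps_p.size, alpha_p.size), 1e-6)   # replace with measured currents (A)

beta = np.pi / 2 - eps_p                          # eq. (42), incidence angle per elevation row
S_beta = np.cos(beta)                             # eq. (34)

M_v  = np.sum(I_P) * d_alpha * d_eps / (SIGMA * K_V * PHI ** 2)                    # eq. (44)
M_va = np.sum(I_P * S_beta[:, None]) * d_alpha * d_eps / (SIGMA * K_V * PHI ** 2)  # eq. (49)
psi = M_va / M_v                                  # eq. (36)
```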

Fig. 7 Experimental and theoretical values of the illuminance (emittance) for both the photodiode and the luxmeter

The quotient between the above two curves, (36), is precisely the directional correction factor ψ. As expected, its value depends on the orientation θ, as shown in Fig. 8.

Fig. 8 Directional correction factor

For the final directional correction of the illuminance measurement, the actual inclination of the sensor has to be considered. The spectrally and directionally corrected photodiode measurements are depicted as red dots in Fig. 9. Blue dots represent the measurements obtained by a calibrated luxmeter.

Fig. 9 Photodiode and luxmeter measurement

As can be seen, the error in the photodiode measurement also depends on the inclination. This error is shown in Fig. 10. The maximum error for any inclination is 65 %. For inclinations covering direct (or almost direct) lights, the errors are about 20 %, and for the indirect-light areas the errors are below 10 %. These results are similar to the 57 % maximum error reported for an RGB photosensor [10], and much better than the 152 % error stated for a device equipped with three-color sensors [24]. The proposed method matches or surpasses previously reported research while using less complex (and cheaper) sensors.

Fig. 10 Illuminance measurement error

Although a 65 % maximum error in the illuminance is not a good figure for laboratory measurements, it can be acceptable for applications where the goal is to control the subjective perception of light, as in stage lighting. In fact, the relationship between illuminance level and brightness (subjective perception) is logarithmic [37], following Fechner’s law [11]. The error of the proposed method in brightness terms is about 10 % or below for most inclinations, and never higher than 20 % (Fig. 11).
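As a rough, illustrative order-of-magnitude check (assuming a perception reference E_0 and an operating range of about 150:1 above it, values not taken from the measurements), Fechner’s law B = k ln(E/E_0) maps a 65 % illuminance error into a much smaller relative brightness error:

$$ \frac{\Delta B}{B}=\frac{\ln \left({E}_{meas}/{E}_{true}\right)}{\ln \left({E}_{true}/{E}_0\right)}\approx \frac{\ln (1.65)}{\ln (150)}\approx 10\ \% $$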

Fig. 11 Histogram of brightness error

6 Conclusions

From the theoretical and experimental work described in the previous sections, it follows that it is possible to use a simple (and cheap) sensor (a photodiode) as a light meter capable of providing quite accurate measurements of illuminance and brightness. Low-cost sensors such as the one proposed require the spectral and directional corrections that have been derived throughout the paper. The error analysis has shown that our approach matches or surpasses the results obtained in previous research, despite using much simpler sensors.

This affordable approach permits the integration of a large number of light sensors into a more general WMSN. It has been shown that this technology can be used for stage light monitoring as part of a Wireless Multimedia Sensor Network. Its use in theaters and the filmmaking industry is proposed, integrating light measurements from many spots with other multimedia information (video, images, audio, and other scalar data).