Introduction

Teaching about sensors requires references to analytical definitions, and a discussion of sensor principles combined with detection principles, the process of sensor interaction with the analyte, and data treatment. Some aspects of teaching analytical terms have been discussed recently [1]. Formerly, scientists developing and applying sensors did not use the terms of fundamental analytics as defined in the Compendium of Analytical Nomenclature ("Orange Book"), but defined new terms for sensor properties [2]. In recent years, however, even sensor journals have increasingly asked authors to use the correct analytical terms for the characterization of sensor properties. Accordingly, modern sensor teaching must provide the correct definitions for the limit of detection (LOD), the limit of quantification (LOQ), sensitivity, selectivity, and reproducibility. In this context, special note should be taken of the frequent misuse of the term "sensitivity": it denotes the slope of the calibration curve, with units of signal per concentration. In the case of biosensing, the calculation of confidence intervals and the determination of LODs in particular depend on the correct use of analytical terms. Furthermore, the terms "detectivity," "sensitivity," and "limit of detection" are sometimes used interchangeably.
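These conventions can be sketched numerically. The following minimal Python example, using hypothetical blank readings and a hypothetical calibration slope, computes the LOD and LOQ on the signal axis and converts them to concentrations via the sensitivity:

```python
import statistics

# Minimal sketch of the common 3-sigma/10-sigma limit conventions.
# All numbers are hypothetical illustration values.
blank_signals = [0.102, 0.098, 0.101, 0.097, 0.103, 0.099, 0.100, 0.101]

mean_blank = statistics.mean(blank_signals)
sd_blank = statistics.stdev(blank_signals)

# Limits expressed on the signal axis (Y)
lod_signal = mean_blank + 3 * sd_blank    # limit of detection
loq_signal = mean_blank + 10 * sd_blank   # limit of quantification

# Sensitivity is the slope of the calibration curve (signal per concentration);
# projecting the signal limits onto the concentration axis (X) uses this slope.
sensitivity = 0.05                        # hypothetical slope, signal units per microgram/L
lod_conc = (lod_signal - mean_blank) / sensitivity
loq_conc = (loq_signal - mean_blank) / sensitivity
```

Note that this simple projection ignores the confidence interval of the calibration, which is taken up again in the discussion of Fig. 10.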

Accordingly, any course teaching sensors must either refer to lectures in analytical chemistry or take the time to introduce students to the fundamentals of analytical chemistry and statistics. Sensors can be considered a "hyphenated technique," since they combine separation (where, in contrast to chromatography, the polymer provides just one theoretical plate, or the recognition element offers specific rather than nonspecific interaction) with detection. Thus, all the analytical basics of analytical chemistry, quality assurance, and chemometrics [3] must be dealt with in lectures teaching sensors. In addition, transport processes, fluidics, molecular interaction equilibria and dynamics, and detection principles (ranging from calorimetric, mass-dependent, electronic, and electrochemical to optical) have to be discussed with respect to the analytical problem. This broad field certainly has to rely on topics also taught in other areas of chemistry and physics. Nevertheless, a sensor course has to bring all these topics together and give students an insight into the interdisciplinary context. Therefore, most of these aspects are covered in this article, using optical sensors as examples.

Sensor principles

Sensor systems contain the transducer with an electronic readout, data evaluation, a sample compartment, and, in addition to the physical sensor, a layer on the transducer that is responsible for selectivity (see Fig. 1). Sensors are complementary to classical analytical instrumentation. They are especially applied in process control or in the monitoring of processes where the analyte concentration changes rather quickly. The quality and the properties of sensors depend on the detection system, on the sensitive layer, and on the fluidics. A large variety of electronic, electrochemical, and optical detection principles are known and have been discussed in detail [4]. For chemical sensors, the sensitive layer acts as a chromatographic system with just a single theoretical plate. Accordingly, selectivity is rather low when separation of analytes is performed only on the basis of the distribution coefficient. To overcome the poor selectivity, sensor arrays have to be applied, with use of chemometrics for model-based and model-free multicomponent analysis. In the case of biosensors, selectivity is achieved by the choice of the recognition element based on the biomolecular interaction process.

Fig. 1

Sensor principle: the sensor system contains control and evaluation, transducer, receptor with recognition and fluidics
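The model-based multicomponent analysis mentioned for sensor arrays can be sketched as follows, assuming hypothetical sensitivity coefficients: each sensor's response is modeled as a linear combination of the analyte concentrations, and the concentration vector is recovered by least squares:

```python
import numpy as np

# Sketch of model-based multicomponent analysis for a low-selectivity array.
# Each row of S holds the sensitivities (calibration slopes) of one sensor
# toward each analyte, so the array response is r = S @ c.
# All numbers are hypothetical.
S = np.array([[0.80, 0.10],   # sensor 1: analyte A, analyte B
              [0.20, 0.60],   # sensor 2
              [0.50, 0.50]])  # sensor 3
c_true = np.array([2.0, 1.0])  # true concentrations (unknown in practice)
r = S @ c_true                 # simulated, noise-free array signal

# Least-squares estimate of the concentration vector from the array signal
c_est, *_ = np.linalg.lstsq(S, r, rcond=None)
```

With noisy real data, the overdetermined system (more sensors than analytes) is what makes the least-squares estimate robust; model-free alternatives such as neural networks are discussed later in the text.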

Figure 2 shows the discrimination between different analytes. The scope of selectivity is given with respect to the variation in the sensitive layer, from a semiconductor, a polymer layer, a simply functionalized polymer, or molecularly imprinted polymers to a variety of biochemical recognition layers. This receptor layer is called the "sensitive layer." Figure 2 shows the increase or decrease of stability, reversibility, selectivity, and the LOD for a variety of sensitive layers, where high sensitivity usually implies a rather low LOD. Polymers are rather inert and have short response times. The number of interaction sites influences the sensitivity of the sensor; for this reason, in the case of polymer sensors, the layer volume is of interest. Recently, in the case of molecularly imprinted polymers, the layer has been formed by beads whose surface is coated with molecular imprints to increase the interaction. In the case of biosensors, as many recognition elements as possible should form the sensitive layer.

Fig. 2

The sensitive layer influences the stability, reversibility, sensitivity, and selectivity of a sensor system. Layers of low selectivity require additional statistical treatment (e.g., use of neural networks). MIP molecularly imprinted polymer

Table 1 lists a variety of detection principles [4,5,6]. For monitoring of ambient air, simple semiconductors or photodiodes can be used. In general, the detection principle is not determined by the application. Thus, competing devices based on different specific detection methods have been published for many analytes. An example of the necessary selection of a detection principle is remote sensing of an explosion-hazard area; here, an optical sensor in combination with fiber optics might be the method of choice.

Table 1 Some detection principles used in chemosensors and biosensors

In this article the focus is on optical detection. The examples of measurement results are given for the detection method of reflectometric interference spectroscopy, which is based on white-light interference of visible radiation reflected at both interfaces of a layer. The shift of the resulting interference spectrum is caused by changes of the optical thickness of this layer (refractive index × physical thickness), either by sorption of molecules with accompanying swelling of a polymer or by an affinity reaction at one of the interfaces of the layer [7].
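The principle can be illustrated with a short sketch (hypothetical layer parameters): for near-normal incidence, interference maxima of a thin layer occur near wavelengths satisfying 2nd = mλ (neglecting phase jumps), so an increase in optical thickness, e.g., by swelling, shifts every extremum to longer wavelengths:

```python
# Sketch of white-light interference on a thin layer (hypothetical numbers).
n, d = 1.50, 500.0          # refractive index, physical thickness in nm
optical_thickness = n * d   # 750 nm

def maxima(nd, lo=400.0, hi=800.0):
    """Wavelengths (nm) of interference maxima in the visible range,
    from the condition 2 * nd = m * wavelength (phase jumps neglected)."""
    return [2 * nd / m for m in range(1, 10) if lo <= 2 * nd / m <= hi]

before = maxima(optical_thickness)
# Analyte sorption swells the layer: the optical thickness increases
# (here by a hypothetical 2%), shifting every extremum to the red.
after = maxima(optical_thickness * 1.02)
```

Evaluating the spectral shift of these extrema is what yields the change in optical thickness that the reflectometric interference measurement reports.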

Chemical sensors

In the case of chemical sensors, the selectivity is usually rather low. Functionalized polymers show a variation of signal that depends on the polarity or other properties of the analyte. Using reflectometric interference spectroscopy, one can monitor the different degrees of swelling caused by various concentrations of gaseous or liquid analytes. In Fig. 3, the calibration curves of some hydrocarbons are given for four different gases. They demonstrate the narrow confidence intervals of exactly reproducible measurements, where the error bars can be seen only if one zooms in around the measurement points.

Fig. 3

Calibration of some hydrocarbons for functionalized polydimethylsiloxane as the sensitive layer. DCE dichloroethene, DCM dichloromethane, TCE tetrachloroethene, TOL toluene

These calibration measurements have to be performed in randomized order (random calibration). Thus, the influence of memory effects can be seen and avoided. Such a necessary analytical procedure is demonstrated in Fig. 4. The difference in the response time of the polymer to different analytes allows the determination of even different analytes with a single polymer layer [8]. These different response times are given in Fig. 5. Cluster analysis, partial least squares, and neural networks have to be used [3, 9,10,11]. The selection of chiral polymers to measure the ratio of enantiomers is also of interest [12].

Fig. 4

Random calibration for mixtures of three analytes. The amounts of the analytes are given on the graph. DCE dichloroethene, DCM dichloromethane, TCE tetrachloroethene, TOL toluene

Fig. 5

For three sensitive layers of hyperbranched polymers, the response times are given for two analytes. These data can be used for chemometric evaluation

For some years, biomimetic sensitive layers have been used. They are more inert than biomolecular recognition layers and somewhat more selective. Many articles have been published introducing a large variety of molecularly imprinted polymers. The "analyte" is copolymerized and afterward eluted; the "cavity" produced interacts more or less selectively with the sample. The disadvantage is that an inflexible polymer matrix increases the response time drastically, whereas flexible polymers lose their memory for the imprinted analyte. For this reason, the polymerization process has been changed to emulsion-controlled polymerization forming nanobeads, which combine good imprinting with good accessibility [13].

Biosensors

In biochemical sensors, an additional layer is added between the transducer and the recognition elements to reduce nonspecific interaction. This layer is especially essential in the case of direct optical measurements [14]. Nonspecific interaction is a special problem in matrices such as blood, milk, or wastewater. This modified layer system is given in Fig. 6. The quality of biosensors depends on the recognition elements. Some frequently used recognition elements are listed in Table 2. For a biosensor, some different types of assays can be discussed [15, 16]. Direct assays allow the measurement of very large analytes such as C-reactive protein (CRP). The recognition element is immobilized on the shielding layer, and no additional reagent is necessary. Direct detection of CRP is possible by immobilization of CRP antibodies on the surface and detection of the large CRP as an inflammation marker in blood [17]. Direct assays are preferable, but can be realized only for a few biomarkers. Another application is the detection of antibodies against Salmonella [18].

Fig. 6

Layer system for a biosensor including the shielding layer

Table 2 Examples of recognition elements

Frequently, sandwich assays are used. One recognition element is immobilized on the layer. It selects the analyte. A second recognition element forms the sandwich and increases selectivity, since the analyte has to fit to two recognition sites. However, small analytes cannot provide two different interaction sites for recognition elements for the sandwich assays. The so-called binding inhibition assay allows one to determine even small analyte molecules.

Considering this binding inhibition assay, there are three areas (as shown in Fig. 7): the first has an equilibrium between the analyte and one recognition element; the second is the transport-limited area; and the third is the area at the transducer surface, where a derivative of the analyte is immobilized on top of the shielding layer. The binding inhibition assay is usually combined with a preincubation phase and a flow injection analysis system. After an interaction lasting 5 min, antibodies and analyte molecules have established an equilibrium such that the antibodies are partly blocked. The subsequent steps (transport, affinity reaction at the surface) depend on the amount of nonblocked antibodies present, on the diffusion constants for diffusion of these free antibodies to the surface, on the loading of the surface with immobilized analyte derivatives, and on the rate constants for the second equilibrium at the surface. In principle, the result is binding curves with association and dissociation parts, as shown in Fig. 8 (top right). In Fig. 8, these binding curves are provided for four different analyte concentrations in the sample; the form of each curve is determined by the surface loading, the diffusion of nonblocked antibodies to the surface, and the rate constants of binding at the surface.

Fig. 7

Processes during a binding inhibition assay

Fig. 8

Determination of rate constants of interaction processes (adsorption/association, desorption/dissociation)

Overall, the total process forms a consecutive reaction where the rate-determining step is given by either a very slow diffusion process or the kinetics of the interaction at the transducer surface. Poor loading of analyte derivatives at the surface causes each transported nonblocked antibody to interact; thus the transport process is the rate-determining step, which results in rather linear slopes in the binding curve over a long time. If, on the other hand, the loading is rather high, this will result in typical binding curves as shown in Fig. 8, and the kinetics of interaction determine the rate.

These curves can be evaluated in two different ways to obtain the association and dissociation rate constants:

  1.

    The first approach uses the three diagrams in Fig. 8, where four concentrations are measured at many times. The slopes of the binding curves at many times are taken and graphed in the second diagram (on the left of Fig. 8), forming more or less straight lines for each concentration. The slopes of these lines in turn form a straight line in the third diagram (at the bottom of Fig. 8), whose slope provides the association rate constant. Unfortunately, the intercept representing the dissociation rate constant cannot be determined very well by these means. The dissociation rate constant has to be calculated from the first-order dissociation rate equation according to the data obtained in the diagram at the top right of Fig. 8. In this figure, Γ is the quantitative result of the interaction process measured at the surface. This amount changes over time according to the observed rate constant k_s and the amount of antibodies at the surface in equilibrium; the maximum possible equilibrium value is given by Γ_max.

  2.

    The second approach is curve fitting to the binding curve by Eq. 1:

$$ R(t)=A\left(1-{\mathrm{e}}^{-{k}_{\mathrm{s}}\left(t-{t}_0\right)}\right), $$
(1)

where R(t) is the signal measured with time. Curve-fitting results and measured values are compared until there is no systematic difference over the entire fitting range. One can optimize the result by changing the area of the curve to be fitted and the range of the fit [19].
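A sketch of this second approach, using synthetic data with hypothetical parameters and SciPy's `curve_fit` as one possible fitting tool, is:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of fitting Eq. 1, R(t) = A * (1 - exp(-k_s * (t - t0))),
# to a binding curve. Data are synthetic with hypothetical parameters.
def binding_curve(t, A, k_s, t0):
    return A * (1.0 - np.exp(-k_s * (t - t0)))

t = np.linspace(10, 300, 60)              # time points in seconds
true_A, true_ks, true_t0 = 1.2, 0.02, 10.0
rng = np.random.default_rng(1)
R = binding_curve(t, true_A, true_ks, true_t0) + rng.normal(0, 0.005, t.size)

# Fit; comparing residuals over different fitting ranges reveals
# systematic deviations, as described in the text.
popt, pcov = curve_fit(binding_curve, t, R, p0=[1.0, 0.015, 5.0])
A_fit, ks_fit, t0_fit = popt
```

Restricting or extending the fitted section of the curve, as suggested in [19], is a practical check that the observed rate constant k_s is not biased by transport-limited parts of the signal.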

A further discussion of biomolecular interaction analysis, with its limits and comments regarding black-box programs, can be found in [18, 20,21,22].

Graphing the measured interaction signal of immunoreactions versus the logarithm of concentration results in a typical sigmoid curve, which is given in Fig. 9 for the measurement of the nonsteroidal anti-inflammatory drug diclofenac, together with the recovery rates. Both diagrams include the error bars according to normal analytical treatment. The European Union limits the amount of this nonsteroidal anti-inflammatory drug to 1 μg/kg [23].

Fig. 9

Measurement of diclofenac in milk: calibration with confidence interval and error bars (left); recovery rates within the range 70–120% (right)

For linear calibration curves, the limits are well defined and must also be used for sensor measurements (many polymer sensors show linear calibration; however, thermodynamically there can be nonlinear interactions involving mixing enthalpies). According to IUPAC, the LOD is the minimal detectable value (L_d) [2], defined as the mean blank value plus three times its standard deviation, and the LOQ (minimal quantifiable value) is the mean blank value plus ten times the standard deviation. To be correct, however, one has to discriminate between the signal axis (Y) and the concentration axis (X). From a number of blank measurements, one obtains a distribution curve (α = 5%) giving a limit of decision on the Y-axis. Using the calibration curve with its confidence interval, one finds a second distribution function (β = 50%) on the Y-axis for measurements containing the sample. Both overlap within a statistically defined limit (β = 50%, type 2 error) to give the lowest value on the calibration curve; projection onto the X-axis yields the minimal detectable value (LOD) in concentration units. A better approach overlaps both distribution curves with only α = 5% and β = 5% on the Y-axis; projection by use of the calibration curve then results in another detection limit (detection capability, critical value of the net state variable, six times the standard deviation) [24,25,26]. If the confidence interval is also taken into account, projection gives a higher LOQ concentration on the X-axis. To determine concentrations from signals, the inverse of the calibration function (the analysis function), with its wider confidence interval, has to be used. These considerations are demonstrated in Fig. 10. In this context, a higher number of blank measurements reduces the limit values. The confidence interval for the calibration is graphed; however, the inverse of the calibration function has to be used, which results in a broader confidence interval.

Fig. 10

Signal and concentration axis with calibration curve and confidence interval giving the limit of detection (LOD) and limit of quantification (LOQ)

In the case of nonlinear calibration, the determination of limits is more difficult, as demonstrated in Fig. 11. Normally, the minimum detectable concentration (green line cutting the calibration curve) and the reliable detection limit (green line cutting the confidence curve) are used as the LOD; both lie at higher concentrations than the value given by three times the standard deviation. The minimum detectable concentration and the reliable detection limit are calculated for 95% probability. The working range defines the range in which a signal allows quantification of the concentration of an analyte. Further discussions of and variations from the IUPAC definitions have been published [16, 27, 28].

Fig. 11

Sigmoid calibration in the case of immunoreactions with the minimum detectable concentration (MDC) and the reliable detection limit (RDL). LOD limit of detection
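For such sigmoid immunoassay calibrations, a four-parameter logistic model is commonly fitted; the following sketch (synthetic, hypothetical data; SciPy used as one possible tool) recovers the curve parameters from which the working range and the limits can then be derived:

```python
import numpy as np
from scipy.optimize import curve_fit

# Four-parameter logistic model, a common description of sigmoid
# immunoassay calibrations (signal decreases with concentration in a
# binding inhibition assay). All data here are synthetic and hypothetical.
def logistic4(x, top, bottom, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])  # micrograms/L
true_params = (1.0, 0.05, 0.8, 1.2)
rng = np.random.default_rng(0)
sig = logistic4(conc, *true_params) + rng.normal(0, 0.01, conc.size)

# Bounded fit keeps ic50 and the Hill slope positive during optimization.
popt, _ = curve_fit(logistic4, conc, sig, p0=[0.9, 0.1, 1.0, 1.0],
                    bounds=([0.5, 0.0, 0.01, 0.2], [2.0, 0.5, 50.0, 5.0]))
top_fit, bottom_fit, ic50_fit, hill_fit = popt
```

The working range is then often taken between the concentrations at which the fitted curve spans, e.g., 10–90% of its dynamic range, and the minimum detectable concentration and reliable detection limit are read from the fitted curve and its confidence band as described above.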

Another typical application of a biosensor is the combined measurement of CRP and anti-Salmonella antibodies in animal samples, allowing the parallel detection of Salmonella infections and of the status of the infection by quantifying CRP [29]. Two different assay types are used in parallel on one optical platform. The measurement of real samples is of interest. Recently, applications in various fields have been reviewed: in-line sensor monitoring of bioprocesses [30]; the concept of and first results for nanosensors for neurotransmitters [31]; and a noninvasive method for cancer diagnostics by detection of volatile organic compounds in exhaled breath, demonstrating the future prospects of sensors [32]. Sensors have also proven their capabilities in effect-directed analysis [33] and imaging [34].

Conclusion

Teaching about sensors requires teaching the fundamentals of analytics and careful use of the definitions given. Special care regarding the calculation of LODs must be taken for the nonlinear (sigmoidal) calibration curves of biosensors. Sensitivity is the slope of the calibration curve. Evaluation of sensor arrays requires multicomponent analysis.

The field of sensors is interdisciplinary, combining detection principles, interaction processes, and chemometrics. Biosensors especially are being used increasingly in environmental monitoring, food control, pharmaceutical screening, process control, biotechnology, and homeland security. Sensors are usually applied directly to the sample without sample preparation, which makes them useful in complex matrices such as wastewater, foot creams, and blood. Both the lack of sample preparation and the complexity of such matrices make measurements and their evaluation difficult. However, special tailoring of the surfaces allows one to obtain data of the quality necessary for analytical and statistical treatment.

The same quality of data evaluation is expected as in analytics. The requirements of quality management become obvious when sensors are used in point-of-care instrumentation, where the high standards of analytics are required. This demonstrates that sensors are just another type of instrumentation in analytics.