Model-based vibration control for optical lenses

This work presents a contribution to the active image stabilization of optical systems, covering model development, control design, and the hardware setup. A laboratory experiment is built which demonstrates the vibration sensitivity of a mechanical-optical system. In order to actively stabilize the undesired image motion, a model-based compensation of the image vibration is developed, realized, and tested. Besides a linear actuator motion system, a force sensor system and a position sensor system are introduced and analyzed. In particular, various low-cost hardware components of the Arduino platform are used, which support the deployment of the controller software based on Matlab-Simulink. The remaining image motion is measured with a high-speed vision sensor system and the performance of the overall system is assessed.


Introduction and modeling
Many optical instruments are sensitive to mechanical vibrations and dynamic disturbances since they have to meet stringent requirements in sometimes harsh environments. For instance, such an optical system can be a lithographic objective for the exposure of wafers, a ground-based or space-based telescope, or an optical metrology instrument. Figure 1 sketches some examples of mechanical models, whereby the elasticity of bodies can also be taken into account, depending on the intended investigation. By means of an optical system, an object is projected onto the image plane, the light passing through optical elements like lenses or mirrors. However, during the operation time, even small mechanical disturbances and displacements of the optical elements can be sufficient to produce unacceptably aberrated, disturbed images. In order to reduce such effects and to improve the imaging performance, an active image stabilization is developed in this work.

Previous test setup
In a previous work according to [2,8], an experimental setup has been planned and built, which demonstrates the passive suppression of optical aberrations, i.e., an image stabilization, by means of a structural optimization changing masses and stiffnesses. The setup is roughly derived from a lithographic objective with six projection lenses, whereby the lateral dynamical behavior of the six lenses, the six mountings, and the joint housing is represented by a rigid multibody system according to [9] using 13 generalized coordinates q(t). Thus, the out-of-plane displacement of lenses within an objective due to lateral dynamical disturbances is regarded, similar to Fig. 1(b). The rotational components and the longitudinal displacements of the lenses are not considered in this experiment, since they usually have a smaller optical impact than the lateral displacements.
The main structure of the realized setup is illustrated in Fig. 2. On the left-hand side, the object is represented by a transparent circle with a radius of 5 mm and a light source behind it, so that this light point propagates through the glass lenses onto the image plane on the right-hand side. There, a paper screen is used to make the image of the light point visible, with a radius of approximately 2.5 mm.

Fig. 2 Principle of the measurement setup: the coupled lenses with peripheral measurement devices, according to [2]

The image motion a(t) is caused by the lateral oscillations of the lenses in one direction due to a hammer impulse, which imitates a disturbance. In the case of small motion, the equations of motion are linear, as is the relationship between the displacement of the lenses and the resulting optical aberrations. This relationship can be ascertained by experiment or simulation, and one can express it by the kinematic-optical sensitivities collected in the matrix C_a, as proposed in [10]. The image displacement is here the only regarded optical aberration, hence C_a is a vector and the system output is

a(t) = C_a · q(t). (1)

In order to simplify the construction, the six lenses are supported with k-indexed pendula, which are coupled via springs to intermediate masses and the housing. Hence, the mechanical system has 13 rotational degrees of freedom (DOF). Since the springs with the different stiffnesses κ_Lk and κ_Mk are exchangeable, and the moments of inertia I_k are modifiable, it is possible to create different test systems, e.g., with low frequencies and high amplitudes. The equations of motion result in the structure

M q̈(t) + D q̇(t) + K q(t) = B u_F(t) (2)

with the inertia-dependent mass matrix M, the damping matrix D, the spring-dependent stiffness matrix K, and the mechanical input matrix B. The related force of the impact hammer, which acts as the system input u_F(t), is measured as during a usual experimental modal analysis.
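The mapping from lens coordinates to image displacement described above can be sketched numerically. The following Python snippet uses made-up sensitivity values, since the real vector C_a is identified by experiment or simulation:

```python
import numpy as np

# Hypothetical kinematic-optical sensitivities for the 13 generalized
# coordinates (the real values are identified by experiment or simulation).
C_a = np.array([0.8, -0.5, 1.2, 0.3, -0.9, 0.6, 0.1,
                -0.2, 0.4, -0.7, 0.5, 0.2, -0.1])

# Example generalized coordinates q(t) at one time instant
q = np.zeros(13)
q[2] = 1e-3   # small rotation of the 3rd pendulum only

# System output according to Eq. (1): the scalar image displacement
a = C_a @ q
print(a)   # only q[2] contributes here: 1.2 * 1e-3
```

With only one nonzero coordinate, the output reduces to a single product, which makes the role of the individual sensitivities easy to demonstrate, e.g., in a student laboratory.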
To identify the image motion resulting as a system output, a high-speed camera with a Camera Link interface is used in combination with a frame grabber card. The latter is connected to a computer and triggered via the modal analyzer. After the recording of the frames, the image processing for the motion identification of the light point is performed.
With the knowledge of the force input and image motion output of the mechanical-optical system, the dynamical-optical transfer function can be experimentally identified.
Since the dynamical behavior of the mechanical system can be adjusted via the exchangeable springs and modifiable moments of inertia, an optimization can be performed with the 18 design parameters given by κ_Lk, κ_Mk, and I_k, see [8]. The actual realization of the setup with horizontal lens motions is shown in Fig. 3. Despite the pendulum design and the use of translational springs according to Fig. 3(a), the mechanical system can be considered to behave linearly for small excitations and displacements, as investigated in [11]. Most of the construction parts shown in Fig. 3(b) are designed by means of computer-aided design, realized with acrylic glass, and manufactured with a laser cutting machine. This initial test setup was used to demonstrate the influence of a structural optimization based on a dynamical-optical simulation model [2,8], but will be extended as follows.

Active control extension
The purpose of the extended setup is to realize and demonstrate an actively controlled image stabilization experiment based on a dynamical-optical simulation model, an actuator, and different sensors. In particular, the 5th lens according to Fig. 4 is therefore mounted on a linear actuator stage in order to compensate the optical aberrations actively. Since this lens has a high kinematic-optical sensitivity, the required correcting motion is small and fast [11]. Furthermore, the position of the 4th lens is measured to allow a model-based estimation of the light point motion. In the chosen stiffness configuration, this lens is strongly coupled to the other bodies, so its motion is quite large and suitable to sense [12]. Additionally, the hammer impact is detected, and both signals are later used as inputs for the Kalman-Bucy filter.
At first, the changed mechanical multibody system without the 5th lens and its intermediate mass is derived and identified in the form of the equations of motion (2) with 11 DOF. For the stiffnesses and moments of inertia, the initial design parameters according to [8] are chosen. Figure 5 compares the image motion calculated by the simulation model with the results of the vision sensor system. For this, the Rayleigh damping parameters α = 0.03 s⁻¹ and β = 3 · 10⁻⁴ s are estimated, which leads to the damping matrix D = αM + βK. Furthermore, the controller is deactivated and a hammer shock is applied as later studied in Fig. 11. The results in Fig. 5 indicate that the simulation model meets the real system behavior quite well, especially shortly after the impact and in terms of the frequencies. However, the simulation-based result diverges after the first few seconds, since the linear model of the free-running pendula is only a simplified approximation of the real behavior, e.g., since the bearing friction is neglected.
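The Rayleigh damping assumption can be reproduced with a small Python sketch. The 2-DOF mass and stiffness matrices below are illustrative stand-ins, not the identified 11-DOF model; only the parameters α and β are taken from the text:

```python
import numpy as np

alpha, beta = 0.03, 3e-4      # Rayleigh parameters, in 1/s and s

# Illustrative stand-in 2-DOF mass and stiffness matrices
M = np.diag([2.0, 1.0])
K = np.array([[300.0, -100.0],
              [-100.0, 100.0]])

# Rayleigh (proportional) damping matrix
D = alpha * M + beta * K

# For proportional damping, mode i receives the damping ratio
# zeta_i = (alpha / omega_i + beta * omega_i) / 2
evals = np.linalg.eigvals(np.linalg.solve(M, K))
omegas = np.sqrt(np.sort(evals.real))
zetas = 0.5 * (alpha / omegas + beta * omegas)
print(D)
print(zetas)
```

The appeal of this ansatz is that only two scalars have to be estimated from the measured decay, while every mode automatically receives a positive damping ratio.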
According to Fig. 4 and in contrast to the previous setup, the control setup looks quite complicated, although the image stabilization strategy seems straightforward. The force-induced image error, denoted as the displacement a_F, has to be negatively superposed with the image error a_M caused by the shift of the compensation lens. In order to extinguish the total imaging error a according to Eq. (1), it yields

a = a_F + a_M = 0. (3)

With the kinematic-optical sensitivity s_L5 of this compensation lens, obtained by experiments [8] and simulations [11], the required position of the related actuator results in

y_M = a_M / s_L5 = −a_F / s_L5. (4)

This finding can be used to estimate the requirements for the actuator system, as investigated in [11]. The required travel range is approximately 6 mm and the potential velocity should be at least 100 mm/s. However, the actuator and the vision sensor react with a time delay due to the subordinate motor control, the intrinsic latency of the camera, the image acquisition, and the computationally expensive image processing. According to empirical investigations of the overall dynamical system, the vision sensor system requires a maximum sample time of 5 ms and real-time capability in order to provide a stabilizing feedback. Therefore, an appropriate industrial machine vision system with a high-performance processor should be used, as explained in [13]. With the low-cost vision system Basler ace-acA800-200gm used in this work for educational purposes, it is not possible to fulfill the real-time and sample time requirements. As a result, the vision sensing system is only used for the offline assessment of the image stabilization task. Nevertheless, in order to realize the dynamical-optical control based on feedback, the following approach is chosen.
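The superposition idea behind the compensation can be sketched in a few lines of Python. The sensitivity and aberration values below are made up; the real sensitivity of the compensation lens is identified in [8] and [11]:

```python
# Sketch of the compensation law: the actuator shifts the 5th lens so that
# its image contribution cancels the force-induced aberration a_F.
s_L5 = -0.65   # hypothetical kinematic-optical sensitivity (image mm per lens mm)
a_F = 1.3      # hypothetical estimated force-induced image displacement, mm

# Required actuator position so that a_M = s_L5 * y_M = -a_F
y_M = -a_F / s_L5
print(y_M)     # 2.0 mm

# Sanity check against the actuator requirement from the text
assert abs(y_M) <= 6.0, "exceeds the ~6 mm travel range"
```

A large sensitivity magnitude of the compensation lens is favorable, since it keeps the commanded stroke y_M small for a given aberration.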
A real-time capable control unit system, an Arduino Due with a sample time of 2–8 ms, is used in combination with a position sensor at the 4th lens. The related model-based controller takes the signal u_F of the disturbance and the signal y_L4 of the position sensor subsystem as inputs, estimates the expected image aberration by means of a state observer, and commands the actuator subsystem with the output signal u_M accordingly. The corresponding subsystems are indicated with labeled boxes in Fig. 4.

Arrangement of the hardware systems
The hardware setup of the final system is shown in Fig. 6 and discussed within this section. At the lower left, the linear actuator stage is illustrated, on which the compensation lens is mounted. This motor is driven and positioned by the related motor controller. It steers the stage according to a given analog input, which represents the commanded trajectory given by the main control unit system. The latter is realized by means of the Arduino Due. In order to provide a lens position feedback for the dynamical-optical control, a Hall-effect sensor is utilized, which is monitored by an Arduino Uno. For the transmission of the position data to the Arduino Due, a digital-analog converter is used, which provides the information by means of an analog signal.
At the upper left, the devices for sensing the force are depicted. Since the modal hammer has a built-in force sensor based on the integrated circuit piezoelectric (ICP) technology, it needs to be supplied by an ICP conditioner. This device powers the sensor and further amplifies the small force signal to a recognizable analog signal. In order to prevent the occurrence of high voltages, a protection shield is interposed and, in addition, a comparator shield is used to trigger the feedforward control within the Arduino Due. During the deployment of the software on the microcontrollers, they have to be connected to the notebook. Also, for the configuration of the motor controller, it must be connected to the notebook. Finally, for offline verification purposes, the Basler GigE camera can be connected to the notebook, which can further perform the image processing. As a consequence, the notebook is only used for the post-processing of the camera images and for the uploading of the controller software.

Lens actuator system
The actuator system is realized with the V-528.1 PIMag Linear Stage [14] and the related C-413.20A PIMag Motion Controller [15]. This motor meets the requirements formulated in Sect. 1.2, since it has suitable dynamical properties like a nominal force of 2.3 N, a maximum velocity of 250 mm/s, and a travel range of 20 mm at a high precision. The compensation lens is mounted onto the motor stage by means of a stiff and lightweight construction, as shown in Fig. 12. It consists of acrylic glass and aligns the 5th lens with the other lenses on the optical axis for the initial position y_M = 0 mm. In order to power, position, and move the stage, an appropriate motor controller has to be used. In particular, the chosen type of device can also handle analog inputs and can report information by means of analog outputs. Thus, it can control the stage position according to a commanded position u_M, and the actual value y_M of the position encoder can be monitored.
For the configuration of the related parameters as well as for basic tests, the motion controller can be connected to a computer via USB. The supplied PIMikroMove software allows not only adjusting the trajectory tracking control based on a PID cascade, but also changing the filter settings. On the one hand, this can reduce the noise of the incoming signal; on the other hand, it can introduce an undesirable phase shift causing a time lag. Thanks to a dynamic link library, which is also provided by the manufacturer, the configuration and the quasi-static actuator position can be adjusted using Matlab or LabView.
Furthermore, the integrated Profile Generator is enabled by default, which plans a smooth trajectory according to the given input signal. However, for a fast response behavior this should be deactivated, since it results in a significant delay between the analog input signal of commanded position and the actual position of the stage. As a result, the trajectory tracking control is mainly determined by the servo PID-control parameters and low-pass filter settings, which are tuned empirically.
Next, the behavior of the closed-loop lens actuator system is identified for further simulations. In order to analyze the noisy analog signals recorded by an oscilloscope, a 5th-order Butterworth filter with a cutoff frequency of 100 Hz is applied. The resulting data is used to estimate a simplified closed-loop transfer function of the actuator system by means of the instrumental variable (IV) method [16]. This method is implemented in the tfest function of the System Identification Toolbox in Matlab.
The step response of the estimated transfer function with rounded constants is illustrated in Fig. 7 in comparison to the measurement y_M. It is obvious that the main behavior is well approximated, despite the rough estimation with only three poles and three zeros, as represented in Fig. 8. The initial direction of the simulated step response, followed by a direction reversal, is due to the positive zeros of the transfer function [17]. Since the real lens actuator system in the closed-loop mode actually behaves nonlinearly, the estimated step response is only representative for steps of around 1 mm, as considered here. In addition, larger steps are investigated during the system identification process. However, it turns out during the later dynamical-optical control that the motor behavior is quite well represented by the considered transfer function scaled by a factor of 0.97, as plotted in Fig. 9. Broadly speaking, the time difference between the signals u_M(t) and y_M(t) for the target travel profile is about 8 ms.
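The initial direction reversal caused by positive zeros can be reproduced with a small Python simulation. The transfer function below is a generic non-minimum-phase example, not the identified motor model:

```python
import numpy as np

# Illustrative non-minimum-phase transfer function with one positive zero:
#   G(s) = (-s + 1) / (s^2 + 2 s + 1)
# written in controllable canonical state-space form.
A = np.array([[0.0, 1.0],
              [-1.0, -2.0]])
B = np.array([0.0, 1.0])
C = np.array([1.0, -1.0])     # output row from the numerator 1 - s

dt, T = 1e-4, 8.0
x = np.zeros(2)
y = []
for _ in range(int(T / dt)):
    x = x + dt * (A @ x + B * 1.0)   # unit step input, forward Euler
    y.append(C @ x)
y = np.array(y)

print(y.min())   # the response first undershoots below zero
print(y[-1])     # and settles near the DC gain of 1
```

The undershoot is the state-space counterpart of the pole-zero picture in Fig. 8: a zero in the right half-plane forces the step response to start in the wrong direction before converging.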

Force sensor system
In order to excite the mechanical system with a detectable force, a modal hammer is utilized. It is installed in a pendulum mechanism according to Fig. 10(a) for the generation of a reproducible impulse. The related ICP conditioner according to [18] is illustrated in Fig. 10(b). It not only powers the corresponding force sensor at the hammer tip, but also provides a signal proportional to the force, and a further amplification can be applied, e.g., by a factor of 100. However, if the hammer shock is accidentally strong, excessive voltages can damage the sensitive microcontroller input. In order to limit the voltage signal, a protection shield based on a Zener diode can be realized.

Fig. 11 The measured hammer force F_h is plotted against time during an impulse excitation. In addition, the approximated function F_h,approx is shown, and F_h,eq · t indicates the area of an equivalent force impact, estimated by means of the energy equilibrium

Since the sensor sensitivity in the unit mV/N is specified by the manufacturer, the actual force distribution F_h(t) can be obtained from the measured voltage, as shown in Fig. 11. In addition, the force distribution can be approximated by an analytical function with the parameters p_1 = 0.0032 s, p_2 = 2.065, and p_3 = 48.55 N. This simplifies the numerical simulation and avoids the use of a lookup table.
Due to the defined pendulum mechanism, the magnitude of the excitation does not change. Therefore, the computational effort for the later used control unit can be reduced, since the change of momentum can also be represented by a constant equivalent force of 19 N acting for a sample time of t = 4 ms, which is also indicated in Fig. 11. Consequently, the control unit only needs a trigger signal at the moment the hammer hits the main body. For this purpose, one can use a comparator circuit consisting of an operational amplifier and several resistors [19].
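The equivalent-force idea can be checked numerically. The Python sketch below uses a generic half-sine stand-in for the hammer pulse (not the measured F_h or its analytical approximation) and equates the impulse integral to a constant force acting for one 4 ms sample:

```python
import numpy as np

# Generic half-sine stand-in for the hammer pulse F_h(t); the amplitude is
# chosen arbitrarily so that the example matches the 19 N value from the text.
dt = 1e-5                                  # s, integration step
T = 4e-3                                   # s, assumed pulse duration
t = np.arange(0.0, T, dt)
F_h = (19.0 * np.pi / 2) * np.sin(np.pi * t / T)   # N

# Momentum transferred by the pulse (impulse integral, rectangle rule)
impulse = F_h.sum() * dt                   # N*s

# Equivalent constant force acting for one controller sample time of 4 ms
F_eq = impulse / T
print(F_eq)
```

Replacing the short pulse by a constant force with the same impulse is what allows the controller to work from a single trigger instant instead of sampling the full force history.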

Position sensor system
A position sensor is utilized to observe the behavior of the mechanical system and to estimate the image displacement by means of a Kalman-Bucy filter. Therefore, the low-cost AS5306 magnetic incremental linear position sensor according to [20], which is based on the Hall effect, is installed at the 4th pendulum, see [21]. In combination with the Magnetic Multipole Strip MS12-15, described in [22] and shown in Figs. 12(a) and 12(b), a wide range of displacements can be detected. Each pole of the corresponding magnet strip has a length of 1.2 mm, and since they lie next to each other, the vertical component of the magnetic field B_pk changes with respect to the axial location. The considered Hall-effect sensor transforms this change into a digital signal with four connectors: two for the power supply, while the other two provide a quadrature incremental output. This output consists of the quadrature signal A and the 90°-shifted signal B, which further allows identifying the travel direction. Figure 12(c) indicates an exemplary measurement near a pole pair.
Thanks to an included interpolation circuit, the length of a pole pair is divided into 160 positions, which is further decoded into 40 quadrature pulses, see also [20]. As a result, the sensor resolution yields approximately 15 µm. The magnetic strip is slightly bent and glued to the bottom of the 4th pendulum. Furthermore, the Hall-effect sensor is fixed to the ground, as shown in Fig. 12(d). Hence, a constant gap between the magnet and the chip of about 0.5 mm can be ensured.
In order to obtain the actual measured position, an Arduino Uno microcontroller is used, which runs an interrupt-based quadrature decoding algorithm. Due to a production-related deviation of the pole length and the slight bending of the magnetic strip, the default step width of 15 µm of the sensor leads to inaccurate results. Using the linear actuator system, the accuracy is measured and the step width is empirically adjusted to 13.6 µm.
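The interrupt-based quadrature decoding can be sketched in Python. The Gray-code transition table and the sampling of the A/B lines are simplified assumptions; the real implementation runs as compiled code on the Arduino Uno:

```python
# Minimal sketch of quadrature decoding for the A/B signals of the sensor.
# A lookup over (previous state, new state) yields the step direction;
# invalid transitions (skipped states) simply count as zero here.
STEP = {  # (prev, curr) 2-bit states encoded as A | (B << 1)
    (0, 1): +1, (1, 3): +1, (3, 2): +1, (2, 0): +1,
    (1, 0): -1, (3, 1): -1, (2, 3): -1, (0, 2): -1,
}

def decode(states, step_width_um=13.6):
    """Accumulate quadrature transitions into a position in micrometers.

    states: sequence of 2-bit samples of the A/B lines, as an interrupt
    routine would see them; step_width_um is the empirically calibrated
    step width from the text.
    """
    count = 0
    for prev, curr in zip(states, states[1:]):
        count += STEP.get((prev, curr), 0)
    return count * step_width_um

# Four forward transitions of the assumed Gray sequence 0 -> 1 -> 3 -> 2 -> 0
print(decode([0, 1, 3, 2, 0]))   # 4 * 13.6 um = 54.4 um
```

Traversing the same sequence backwards yields the negated position, which is exactly how the 90°-shifted B signal encodes the travel direction.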
In order to transfer the lens position in a range of [−8, 8] mm to the control unit, an external 10 bit digital-analog converter (DAC) is utilized. It is connected via a serial peripheral interface (SPI) to the Arduino Uno and is realized with a Microchip Technology MCP4911 according to [23]. The analog output is determined in the range of [0, 3.3] V, which is suitable for the analog input of the Arduino Due.
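The scaling between lens position, DAC code, and output voltage can be illustrated with a short Python sketch. The function names are made up; the real conversion runs on the Arduino Uno before the SPI transfer to the MCP4911:

```python
def mm_to_dac_code(pos_mm, pos_range=(-8.0, 8.0), n_bits=10):
    """Map a lens position in mm to a DAC code, sketching the scaling
    performed before the value is sent to the 10 bit DAC."""
    lo, hi = pos_range
    frac = (pos_mm - lo) / (hi - lo)        # 0 ... 1 over the travel range
    return round(frac * (2**n_bits - 1))    # 0 ... 1023

def dac_code_to_volt(code, v_ref=3.3, n_bits=10):
    """Analog output voltage of the DAC for a given code."""
    return code / (2**n_bits - 1) * v_ref

code = mm_to_dac_code(0.0)   # centered lens maps to mid-scale
print(code, dac_code_to_volt(code))
```

The Arduino Due then inverts this mapping at its analog input, so the 16 mm position range is carried over the 0 to 3.3 V wire with roughly 16 µm of quantization per DAC step.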

Control unit system
The main control task is performed by the Arduino Due board. It not only provides multiple analog inputs but also two analog outputs, all of them with 12 bit resolution. With the clock speed of 84 MHz and flash memory of 512 KB, it is a powerful Arduino board that is compatible with the Simulink Support Package for Arduino Hardware, see also [24]. Thus, a control algorithm can be graphically developed by means of Simulink. As a result, it is well suited to visualize the structure of a control strategy and to demonstrate the change of parameters, for instance, during a student training in the laboratory. Based on the designed model, the Simulink Support Package generates C-code, compiles it appropriately for the processor, and deploys the result to the USB-connected board. It is even possible to integrate custom Arduino libraries in Simulink via the S-Function Builder. The final hardware box is pictured in Fig. 13; it contains not only the control unit board, but also the introduced shields and devices of the other subsystems. Furthermore, a switch, a button, a rotary potentiometer, and a display are installed for the activation of the controller, the reset of the systems, the adjustment of the measurement magnitude, and information about the current control algorithm.

Vision sensor system
Next, the vision sensor system is described. It utilizes a Basler ace-acA800-200gm camera according to [26] in combination with a FL-CC1614A-2M lens with a focal length of 16 mm. The camera is connected to a common notebook via Ethernet. On the one hand, it can be powered over Ethernet (PoE), for which a PoE-capable Ethernet port has to be available. On the other hand, a separate power supply can be used. At the full resolution of 800 × 600 pixels, the maximum rate of monochrome images is 200 frames/s. This image acquisition is tested by means of the supplied Pylon Viewer software in combination with the Pylon GigEVision Driver, see also [27].
This low-cost vision sensor system is only intended for the offline assessment of the dynamical-optical control. Only if it were possible to acquire and process one image within 5 ms would it be feasible to use the vision feedback directly for the control. Another issue is to communicate the detected position feedback within this time step to the control unit system which commands the actuator. In order to meet such requirements, the regarded camera should be operated in combination with a high-performance embedded system, which has to be real-time capable [13], or with an FPGA-based system.
The horizontal light point motion of about ±4 mm can be captured with the full width of the sensor. The image area of interest (AOI) is further reduced to 800 × 200 pixels. For the vision measurement, the single frames are recorded and stored in the video container format AVI. Afterwards, an image processing can be performed in order to obtain the motion of the light point. The calibration of the vision sensor system is performed in order to get the relation between the measured position in pixels and the actual position in mm. Fortunately, the kinematic-optical sensitivities are accurately known. Thus, the 5th lens can be displaced by means of the actuator and the actual image position can be estimated with a ≈ s_L5 · y_M.
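This calibration idea, commanding known lens positions and fitting the pixel-per-mm scale, can be sketched in Python with synthetic values (the sensitivity, scale, and offset below are all made up):

```python
import numpy as np

# Sketch of the vision calibration: command known positions y_M of the
# compensation lens, detect the light point in pixels, and fit the scale.
s_lens = -0.65   # hypothetical kinematic-optical sensitivity of the moved lens
y_M = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # commanded lens positions, mm
a_mm = s_lens * y_M                            # expected image positions, mm

# Synthetic pixel measurements with an unknown scale and image offset
px_per_mm, offset_px = 40.0, 400.0
a_px = px_per_mm * a_mm + offset_px

# Linear fit pixel = k * mm + offset; the slope recovers the scale factor
k, offset = np.polyfit(a_mm, a_px, 1)
print(k, offset)
```

In the real setup the pixel positions come from the image processing instead of the synthetic line above, and the residuals of the fit give a quick check of the calibration quality.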

Development of the control strategy
Beside the fundamentals of the related theory and the controller design, the prepared simulation model and the tuning of the controller are described within this section. Finally, the implementation on the hardware is discussed.

Fundamentals
It is the main computational task of the control unit to reconstruct the motion of the light point with the help of a state observer. For this purpose, a static Kalman filter according to [28] is used, which is also called Kalman-Bucy filter or linear quadratic estimator (LQE). It is basically an observer which optimally estimates unknown states by means of noisy measurements and an approximated dynamical system model. On the one hand, the error of the measurements is described as the observation noise v(t), and, on the other hand, the approximation errors of the model are represented by the process noise w(t). For the derivation of the method it is assumed that both types can be represented by white Gaussian noise. Hence, the covariance and expectation value of the stochastic noises can be considered at two arbitrary time instants t_1 and t_2. According to [29], it yields

cov{w(t_1), w(t_2)} = Q δ(t_1 − t_2), cov{v(t_1), v(t_2)} = R δ(t_1 − t_2), E{w(t)} = E{v(t)} = 0

with the Dirac δ-function. The matrices Q and R represent the constant power spectral density matrices of the process noise w(t) and the observation noise v(t), i.e., they characterize the inaccuracies of the model states and of the measurements. Since the inaccuracies are commonly not known, they are chosen and adjusted as weighting matrices for the regarded application [29]. The linear state-space model can be written as

ẋ(t) = A x(t) + B u(t) + G w(t), y(t) = C x(t) + v(t)

with the system matrix A, the input matrix B, the noise input matrix G, and the output matrix C. In order to solve the estimation problem optimally, an estimator whose output x̂(t) converges to the actual state x(t) has to be designed. Therefore, the observer dynamics is formulated as

x̂̇(t) = A x̂(t) + B u(t) + L_opt (y(t) − C x̂(t))

with the constant observer gain matrix L_opt. This matrix affects the error-correcting portion and has to be calculated by means of the Riccati equation, see [29]. The filter yields the exact conditional probability estimate in the special case that the errors are normally distributed and zero-mean.
This cannot be proven in most cases, but the assumption of Gaussian noise and the resultant Kalman-Bucy filter usually lead to a reasonable performance [29].
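A scalar Python sketch may illustrate these fundamentals: for a one-state system, the stationary Riccati equation becomes a quadratic in the error covariance, and the resulting gain always yields a stable observer. All numbers below are arbitrary:

```python
import numpy as np

# Scalar Kalman-Bucy sketch:  x' = a*x + w,  y = c*x + v,
# with noise intensities q (process) and r (measurement).
a, c = -0.5, 1.0
q, r = 1e-2, 1e-4

# Stationary Riccati equation 2*a*p + q - (p*c)**2 / r = 0;
# solve the quadratic (c**2/r)*p**2 - 2*a*p - q = 0 for the positive root.
A2 = c**2 / r
p = (2*a + np.sqrt(4*a**2 + 4*A2*q)) / (2*A2)

# Optimal observer gain and resulting estimation-error dynamics
L_opt = p * c / r
pole = a - L_opt * c        # error dynamics e' = (a - L*c)*e must be stable
print(L_opt, pole)
```

Shrinking r relative to q (trusting the measurement more) drives the gain up and the error pole further into the left half-plane, which is exactly the tuning trade-off used later for the weighting matrices.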

Controller design
The actual design of the control strategy is shown in Fig. 14. The main structure is based not only on the Kalman-Bucy Filter (KF) block, but also on a future integration (FI) block and a feedforward (FF) control. First, the system equations of the coupled lenses in analogy to Eqs. (2) and (1) can be formulated in the state-space representation

ẋ(t) = A x(t) + B u_F(t) with the state vector x(t) = [qᵀ(t), q̇ᵀ(t)]ᵀ (12)

consisting of the generalized coordinates and velocities. Since the complete lens system is mechanically coupled, the observability condition is satisfied for the measurement of a single and arbitrary state. The motion y_L4 of the 4th lens due to the excitation u_F has a significant influence on the image aberration, so it is chosen for the position measurement, as explained in Sect. 2.3. Hence, the KF according to Eq. (11) results in the state-space representation

x̂̇(t) = A x̂(t) + B u_F(t) + L_opt (y_L4(t) − C x̂(t)). (13)

This differential equation can be solved by means of a numerical integration. Furthermore, the image aberration due to the force excitation can be approximately ascertained with

â_F(t) = C_optic x̂(t),

whereby C_optic contains the kinematic-optical sensitivities with respect to the mechanical DOF of the related state space, specified in [10]. Second, the design is extended by the FI block, as indicated in Fig. 14. It estimates the system behavior at the future time step x̂_{k+1} by means of the discrete model equations

x̂_{k+1} = A_d x̂_k + B_d u_k, (16)

as derived in [30]. As a result, the signal latency due to the position measurement and lens actuator system can be reduced by the use of the future aberration â_F,k+1, which is expected after the next sampling period T_d. In order to estimate even another step further, one could neglect the change of the input with the assumption u_{k+1} ≈ u_k. This is only valid if the sample time is relatively small compared to the dynamics of the input signal. Hence, it would yield

x̂_{k+2} ≈ A_d x̂_{k+1} + B_d u_k. (17)

Third, the design of the FF control is discussed. Fortunately, the transfer function G_M,est(s) of the lens actuator system is identified according to Fig. 7.
Hence, the performance of the trajectory tracking control can be improved by means of a preceding system containing the inverse behavior. As a consequence, the original zeros become the new poles, and vice versa. However, they are thereby interchanged to the right-hand side of the pole-zero map and, therefore, an unstable filter would result. In order to solve this issue, an approximated filter based on an all-pass [31] can be designed, in which the unstable poles of the inverse are mirrored into the left half-plane. The number of poles is denoted with n_poles, and the gain of the original transfer function also has to be taken into account.
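The one-step prediction of the FI block can be sketched in Python. The 2-state model below is an illustrative stand-in (the real model has 22 states), and the forward-Euler discretization is a deliberate simplification of the exact matrix exponential:

```python
import numpy as np

# Sketch of the FI prediction step: propagate the current state estimate
# one sample ahead with a discretized model.
T_d = 2e-3                        # sampling period, s

# Illustrative continuous 2-state model x' = A x + B u
# (one mass-spring-damper in state-space form)
A = np.array([[0.0, 1.0],
              [-400.0, -0.4]])
B = np.array([[0.0], [1.0]])

# Forward-Euler discretization A_d = I + T_d*A, B_d = T_d*B, a simple
# approximation of the matrix exponential for small T_d
A_d = np.eye(2) + T_d * A
B_d = T_d * B

x_hat = np.array([[1e-3], [0.0]])   # current state estimate
u_k = np.array([[0.0]])             # no input in this sample

# One-step prediction; a second step with u_{k+1} ~ u_k repeats the pattern
x_next = A_d @ x_hat + B_d @ u_k
print(x_next.ravel())
```

Since the prediction is just one matrix-vector product per step, its cost grows quadratically with the state dimension, which is why the 22-state version strains the microcontroller later on.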

Simulation and tuning
In order to apply and investigate the control strategy of Fig. 14, it is implemented in Simulink. The actuator and pendula systems are represented by the use of mathematical models. Furthermore, the observer gain matrix L_opt within the KF according to Eq. (13) has to be computed. Thereby, the noise input matrix is determined as an identity matrix G = I, i.e., all states are considered to have independent process noise w(t). In addition, the weighting matrices Q and R must be chosen, which characterize the model and measurement inaccuracies. The guess of the latter is based on the performance of the position sensor system according to Sect. 2.3 and is further explained in [10], which determines the regarded matrix R. In the confidence that the measurement is quite accurate compared to the just roughly approximated model, the inaccuracy of the latter should be penalized higher. This gives more weight to the measurement and the estimation accuracy than to the model accuracy. The final matrix Q = diag{5 · 10⁻⁸} of dimension 22 × 22 is determined by means of a parameter tuning study in [12].
The corresponding simulation results of the reconstructed light point position based on the y_L4(t)-measurement are shown for the first three seconds in Fig. 15. Thereby, the KF is simulated both with and without the measured excitation u_F(t). In addition, the measurement of the vision sensor and the response of the coupled lens system according to Eq. (12) are presented. As expected, the KF without the input u_F initially differs from the measurement and from the other curves, since the hammer excitation is not considered there. But already after 0.5 s, the actual light point position is estimated quite well by the KF, no matter whether with or without the excitation knowledge. One can also recognize that the estimation without the KF drifts away from the measurement after a while.
Furthermore, the target motor position ŷ_M,target, which is based on the estimated light point motion according to Eq. (4), can be calculated, as illustrated in Fig. 16. If this signal is directly commanded to the motor, the actual position y_M,actual results from the delayed motor behavior. However, according to the controller design in Sect. 3.2, the FI and FF can provide a signal u_M,FF+FI with a future guess and a consideration of the motor behavior. As a result, the simulated motor response y_M,FF+FI meets the target position almost perfectly, so it is expected that the light point motion can be compensated quite well.
If one superposes this motor-generated light point motion with the aberration measured by the camera, the remaining error can be assessed. It is depicted in Fig. 17 and is mainly caused by the estimation error of the KF. Even a time shift of +2 ms or −2 ms between the two signals cannot improve the resulting behavior. This indicates that it is not possible to compensate the motion of the light point completely within this experiment. Nevertheless, the mean value and standard deviation of the measured impulse response are reduced at least by a factor of almost 4, according to this prediction.
For these simulations, a constant sample time of T_d = 2 ms and the ode3-solver according to the Bogacki-Shampine method [32] are used in order to perform the FF+FI control in correspondence to Eqs. (17) and (16).

Experimental investigation
Now, the developed algorithm has to be implemented on the real-time capable control unit. As soon as this is done, the improvement of the optical compensation due to the realized control can be assessed.

Deployment on hardware
Since the control unit is based on the Arduino Due, it is actually only necessary to extend the developed Simulink model by means of the specific Arduino blocks. Then, one can automatically generate the code for the microcontroller and deploy it on the hardware.
However, due to the limited performance of the Arduino Due, it is not possible to run the complete FF+FI control at the sample time of T_s = 2 ms with the chosen ode3 solver, as simulated in Sect. 3.3. Unfortunately, there is a conflict between the FF control and the FI method: on the one hand, the two-step FI with 22 states is computationally so expensive that the sample time has to be increased; on the other hand, the numerical integration of the all-pass based FF control becomes unstable for an enlarged time step. Even with solvers of lower computational effort, like the ode2 or the ode1, this issue cannot be resolved. As a result, the control strategies are separated, so that either the
• FF model with u_F and the ode3 solver at T_s = 2 ms, or the
• FI model without u_F and the ode3 solver at T_s = 8 ms
is running on the hardware. Further deployments with varied sample times and different solver types were also tested, but are not presented in this work. The Arduino Mega 2560, Arduino Uno and Arduino Leonardo [7] cannot stably realize the designed FF+FI control either, since they are not powerful enough.
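The step-size limit can be illustrated with the linear test equation y' = λy: a three-stage, third-order explicit Runge-Kutta step such as ode3 multiplies the state by R(hλ) = 1 + z + z²/2 + z³/6 with z = hλ, and the integration diverges once |R| > 1. With an illustrative fast pole λ = −400 1/s (a hypothetical value, not one identified from the FF model), the 2 ms step lies inside the stability region while the 8 ms step does not:

```python
# Growth factor of one fixed-step third-order RK (ode3) step applied to
# the linear test equation y' = lam*y; the step is stable iff |R| <= 1.
def rk3_growth(h, lam):
    z = h * lam
    return 1.0 + z + z*z/2.0 + z**3/6.0

# Illustrative fast pole lam = -400 1/s (hypothetical value):
r_fast = rk3_growth(0.002, -400.0)   # |R| ≈ 0.43 → stable at T_s = 2 ms
r_slow = rk3_growth(0.008, -400.0)   # |R| ≈ 2.54 → unstable at T_s = 8 ms
```

This is the qualitative mechanism behind the observed conflict: enlarging the sample time to accommodate the FI pushes parts of the FF dynamics out of the solver's stability region.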
First, the Simulink model of the pure FF control, without the FI part, is realized. The sensor signals are received by means of the related analog input blocks, and the signal for the motor command is transferred with an analog output block. The digital inputs are used for the force detection and for the realization of a switch. For monitoring the real-time capability, a pulse is reported via a digital output. Furthermore, the analog signals have to be scaled to their physical units in order to match the model equations, which are based on SI units. The final FI model looks quite similar. It includes the discrete model equations instead of the FF and, in particular, it does not take the detected force u_F into account. The latter is neglected because the KF estimation with and without the consideration of u_F is quite similar shortly after the force excitation. Besides, the force peak lasts only 4 ms, so its reliable detection by a controller with a fixed sample time of 8 ms is quite difficult.
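The scaling of the analog signals can be sketched as follows. Python serves here as a stand-in for the corresponding Simulink gain blocks; the 10 mm/V sensor gain is a hypothetical value, and the Due's 12-bit converters over 0–3.3 V are assumed while the restricted output span of its actual DAC is ignored:

```python
# Illustrative conversion between raw converter counts and SI units.
# Assumptions (not from the paper): 12-bit converters, 0..3.3 V range,
# a hypothetical position sensor gain of 10 mm per volt.
ADC_BITS, V_REF = 12, 3.3
FULL_SCALE = (1 << ADC_BITS) - 1          # 4095 counts

def adc_to_position(counts, gain_m_per_volt=0.010, offset_m=0.0):
    # counts -> volts -> meters (SI units for the model equations)
    volts = counts * V_REF / FULL_SCALE
    return gain_m_per_volt * volts + offset_m

def position_to_dac(pos_m, gain_m_per_volt=0.010):
    # meters -> volts -> counts, clamped to the converter range
    volts = pos_m / gain_m_per_volt
    counts = round(volts * FULL_SCALE / V_REF)
    return max(0, min(FULL_SCALE, counts))
```

The clamping in the output path is a defensive choice: a target position outside the representable voltage range saturates instead of wrapping around.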

Measurement results
The remaining optical aberration a(t), which is represented by the light point displacement, is shown in Fig. 18 for both the FF model and the FI model. In order to clarify the effect of the control, the uncontrolled impulse response a_F(t) is additionally plotted. Obviously, the magnitude of the motion is substantially reduced for both control concepts, by a factor of approximately 4. The FF model demonstrates its advantage by its fast reaction directly after the initial force peak and keeps a remarkable compensation behavior. Since the force peak is not considered in the FI model, its performance at the beginning is worse. After only one second, however, it performs at least as well as the other model. Figure 19 visualizes the peak-to-peak values and the standard deviation of the light point motion in percent. The latter is computed for the time range t > 1 s, so that the initial behavior is neglected. The FF model reduces both values to about 30%, whereas the peak-to-peak value of the FI model remains at 54% because of the poor behavior at the beginning. A pleasant surprise is the reduction of the standard deviation down to 17% with the FI model for t > 1 s. Thus, the choice of the controller model depends on whether an improved behavior is required directly after the excitation or at later times.
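The two evaluation measures of Fig. 19 can be sketched as follows. The Python fragment uses a synthetic signal; a(t) and the resulting percentages are illustrative, not the measured data:

```python
import numpy as np

# Sketch of the two evaluation measures (illustrative, synthetic data):
# the peak-to-peak value over the whole record, and the standard deviation
# restricted to t > 1 s so that the initial transient is excluded.
def aberration_metrics(t, a, t_min=1.0):
    peak_to_peak = np.max(a) - np.min(a)
    settled_std = np.std(a[t > t_min])
    return peak_to_peak, settled_std

def reduction_percent(controlled, uncontrolled):
    # remaining motion of the controlled system in percent
    # of the uncontrolled impulse response
    return 100.0 * controlled / uncontrolled
```

For a controlled response that is, say, a uniform 30% of the uncontrolled one, both measures report 30%, which corresponds to the way the FF result is stated above.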

Conclusions
The methodical development and successful realization of an active image stabilization system were presented in this work. On the one hand, the built laboratory experiment demonstrates the vibration sensitivity of a mechanical-optical system; on the other hand, the image vibration was reduced by means of the model-based compensation control.
After the introduction and modeling of the test setup, the active control strategy was explained. Furthermore, the related hardware components and subsystems were arranged and investigated. The development of the control strategy was performed by means of simulations and is mainly based on a Kalman-Bucy filter. During the deployment of the designed model onto the real-time capable control unit, which is based on components of the Arduino platform, some limitations regarding the sample time became clear. However, the use of the corresponding low-cost controller hardware also led to advantages with respect to the convenient development and code generation in Simulink.
Finally, two different control models for the system with its inherent latency were deployed and tested. The first is based on a feedforward control and the second involves a discrete future integration step. The remaining image motion was measured with a high-speed vision sensor system, and a significant reduction of the image vibration was proven. In some cases it was decreased down to 17% of the initial error.
In further studies, the experiment can be extended with additional sensors and actuators in order to observe and control the system behavior more precisely. By using a more powerful microprocessor, e.g., a future Arduino version, or a single-board computer like the Raspberry Pi, the control unit could become capable of running enhanced control strategies, which could further improve the compensation performance. As an alternative, one might move the more complex calculations, like the time integration, to a Raspberry Pi and use the Arduino mostly for the simple tasks of handling the analog inputs and outputs.