How active is active learning: value function method vs an approximation method

In a previous paper, Amman and Tucci (2018) compare the two dominant approaches for solving models with optimal experimentation (also called active learning): the value function method and an approximation method. Using the same model and dataset as in Beck and Wieland (2002), they find that the approximation method produces solutions close to those generated by the value function approach, and they identify some elements of the model specification which affect the difference between the two solutions. They conclude that differences are small when the effects of learning are limited. However, the dataset used in that experiment describes a situation where the controller is dealing with a nonstationary process and there is no penalty on the control. The goal of this paper is to see if their conclusions hold in the more commonly studied case of a controller facing a stationary process and a positive penalty on the control.


Introduction
In recent years there has been a resurgent interest in economics in optimal or strategic experimentation, also referred to as active learning; see e.g. Amman and Tucci (2018), Buera et al. (2011) and Savin and Blueschke (2016). There are two prevailing methods for solving this class of models: the first is based on the value function approach and the second on an approximation method. The former applies dynamic programming to the full problem, as in the studies by Prescott (1972), Taylor (1974), Easley and Kiefer (1988), Kiefer (1989), Kiefer and Nyarko (1989) and Aghion et al. (1991), and more recently in the work of Beck and Wieland (2002), Coenen et al. (2005), Levin et al. (2003) and Wieland (2000a; 2000b). A nice set of applications of optimal experimentation using the value function approach can be found in Willems (2012).
In principle, the value function approach should be the preferred method, as it derives the optimal values for the policy variables through Bellman's (1957) dynamic programming. Unfortunately, it suffers from the curse of dimensionality (Bertsekas (1976)) and is only applicable to small problems with one or two policy variables: the solution space needs to be discretized, and for larger problems the resulting grid cannot be solved in feasible time. The approximation methods described in Cosimano (2008), Cosimano and Gapen (2005a; 2005b), Kendrick (1981) and Hansen and Sargent (2007) use approaches that are applied in the neighborhood of the linear regulator problem. Because of this local nature with respect to the statistics of the model, the method is numerically far more tractable and allows for models of larger dimension. However, the jury is still out as to how well it performs in terms of approximating the optimal solution derived through the value function. The approximation method described here should not be mistaken for a cautious or passive learning method; here we concentrate only on optimal experimentation (active learning) approaches.
Both solution methods consider dynamic stochastic models in which the control variables can be used not only to guide the system in desired directions but also to improve the accuracy of the estimates of the parameters in the models. Thus there is a trade-off in which experimentation with the policy variables early in time detracts from reaching current goals, but leads to learning, i.e. improved parameter estimates, and thus to improved performance of the system later in time: hence the dual nature of the control. For this reason we concentrate, in the sections below, on the policy function in the initial period. Most of the experimentation (active learning) is usually done at the beginning of the time interval, and therefore the largest difference between the results obtained with the two methods may be expected in this period.
Until very recently there was an invisible line dividing researchers using one approach from those using the other. It is only in Amman and Tucci (2018) that the value function approach and the approximation method are used to solve the same problem and their solutions compared. In that paper the focus is on comparing the policy function results reported in Beck and Wieland (2002), obtained through the value function, to those obtained through approximation methods. Therefore, those conclusions apply to a situation where the controller is dealing with a nonstationary process and there is no penalty on the control. The goal of this paper is to see if they hold in the more frequently studied case of a stationary process and a positive penalty on the control. To do so, a new value function algorithm has been written to handle several sets of parameters, and more general formulae for the cost-to-go function of the approximation method are used (Amman and Tucci (2018)). The remainder of the paper is organized as follows. The problem is stated in Section 2. Then the value function approach and the approximation approach are described (Sections 3 and 4, respectively). Section 5 contains the experiment results. Finally, the main conclusions are summarized (Section 6).

Problem statement
The problem we want to investigate dates back to MacRae (1975) and closely resembles the one used in Beck and Wieland (2002); for this reason it is referred to as the MBW model throughout the paper. The controller minimizes the expected discounted loss

$$\min_{\{u_t\}}\;\mathrm{E}_0\!\left[\sum_{t=1}^{\infty}\rho^{\,t-1}\,L(x_t,u_t)\right],\qquad L(x_t,u_t)=w\,(x_t-\tilde x)^2+\lambda\,(u_t-\tilde u)^2, \quad (1)$$

subject to the system equation

$$x_t = a\,x_{t-1} + b_t\,u_t + g + \varepsilon_t,\qquad \varepsilon_t \sim N(0,s_\varepsilon^2), \quad (2)$$

where the control multiplier $b_t$ is unknown to the controller and may drift over time,

$$b_t = b_{t-1} + \eta_t,\qquad \eta_t \sim N(0,s_\eta^2),$$

and is learned recursively: after each observation of the state, the estimate $\hat b_t$ and its variance $v_t^b$ are updated with the usual Kalman filter equations. The parameters $\hat b_0$, $v_0^b$, $s_\eta^2$ and $s_\varepsilon^2$ are assumed to be known.
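To fix ideas, here is a minimal Python sketch of the system equation and of the learning step, assuming the standard Kalman-filter updating formulas for a scalar regression parameter; the function and variable names are ours, not the paper's.

import numpy as np

a, g = 0.7, 0.0        # known system parameters
s_eps2 = 1.0           # variance of the system noise
s_eta2 = 0.0           # variance of the parameter drift (0 -> stationary b)

def simulate_step(x_prev, u, b_true, rng):
    """System equation (2): x_t = a x_{t-1} + b u_t + g + eps_t."""
    return a * x_prev + b_true * u + g + rng.normal(0.0, np.sqrt(s_eps2))

def kalman_update(b_hat, v_b, x_new, x_prev, u):
    """Update the estimate of b and its variance after observing x_t."""
    v_prior = v_b + s_eta2                        # prediction step for the drift
    innovation = x_new - (a * x_prev + b_hat * u + g)
    denom = u**2 * v_prior + s_eps2
    b_hat_new = b_hat + v_prior * u * innovation / denom
    v_b_new = v_prior - (v_prior * u)**2 / denom
    return b_hat_new, v_b_new

Note how the variance update depends on the control: a larger |u_t| makes the observation more informative about b, which is exactly the channel the active-learning controller can exploit.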

Solving the Value Function
The above problem can be solved using dynamic programming. Dropping the conditioning arguments $\hat b_t$ and $v_t^b$ where convenient, the corresponding Bellman equation is

$$V(x_{t-1}) \;=\; \min_{u_t}\;\int_{-\infty}^{+\infty}\big[\,L(x_t,u_t) + \rho\,V(x_t)\,\big]\,f(x_t)\,dx_t,$$

with the restrictions in (2), $f(x_t)$ being the normal density of the next-period state and $\rho$ the discount factor. After transforming the integration variable so that the state innovation becomes a standard normal variate, the integral on the right-hand side can be numerically approximated on the nodes $\{y_1,\dots,y_n\}$ with weights $\{w_1,\dots,w_n\}$ of a Gauss-Hermite quadrature,

$$\int_{-\infty}^{+\infty} V(x_t)\,f(x_t)\,dx_t \;\approx\; \frac{1}{\sqrt{\pi}}\sum_{k=1}^{n} w_k\, V(x_k),$$

where $x_k$ is the value of the state at node $y_k$, together with the necessary updating equations for $\hat b_t$ and $v_t^b$ at each node.

The computational challenge is to solve this equation numerically. Setting up a grid over the state, an initial guess $V^0$ can be computed from the value of the policy variable $u_t^{CE}$ that minimizes the expected loss when the unknown parameter is fixed at its current estimate $\hat b_t$, i.e. the certainty equivalence (CE) solution of the problem (1)-(2). Given the initial value $V^0$, the Bellman equation is solved iteratively,

$$V^{j+1}(x_k\,|\,\hat b_k, v^b_k) \;=\; \min_{u_t}\;\Big\{\,\mathrm{E}\,L(x_t,u_t) + \frac{\rho}{\sqrt{\pi}}\sum_{m=1}^{n} w_m\, V^{j}(x_m\,|\,\hat b_m, v^b_m)\Big\},$$

until convergence. The value of $u_t$ that minimizes the right-hand side can be obtained through a simple line search. The value of $V^j(x_k, u_t\,|\,\hat b_k, v^b_k)$ is found by locating the corresponding spot on the grid, interpolating between grid points where necessary.

Algorithm: solving the Bellman equation
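A minimal Python sketch of this value-iteration scheme is given below. To keep it short, the beliefs (b̂, v^b) are held fixed, so only the quadrature and line-search mechanics of the Bellman update are shown; the full algorithm carries b̂_t and v_t^b as additional grid dimensions and updates them with the Kalman equations at every quadrature node. All names, grid sizes and search bounds are our own illustrative choices.

import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize_scalar

a, g = 0.7, 0.0
w, lam, rho = 1.0, 1.0, 0.95              # loss weights and discount factor
s_eps = 1.0                               # std. dev. of the system noise
nodes, weights = hermgauss(16)            # Gauss-Hermite nodes y_k and weights w_k

def bellman_iteration(grid_x, b_hat, n_iter=500):
    """Value iteration on a state grid, for fixed beliefs about b."""
    V = np.zeros_like(grid_x)             # initial guess, e.g. the CE cost-to-go
    for _ in range(n_iter):
        V_new = np.empty_like(V)
        for i, x in enumerate(grid_x):
            def rhs(u):
                # next-period states at the quadrature nodes
                x_next = a * x + b_hat * u + g + np.sqrt(2.0) * s_eps * nodes
                # expected one-period loss (desired paths zero)
                EL = (weights @ (w * x_next**2)) / np.sqrt(np.pi) + lam * u**2
                # expected continuation value, interpolated on the grid
                EV = (weights @ np.interp(x_next, grid_x, V)) / np.sqrt(np.pi)
                return EL + rho * EV
            # simple line search for the minimizing control
            V_new[i] = minimize_scalar(rhs, bounds=(-50, 50), method="bounded").fun
        V = V_new
    return V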

Approximating the Value Function
In this section we present a short summary of the derivations found in Amman and Tucci (2017). The approximate cost-to-go of the infinite horizon MBW model can be written as the sum of a deterministic, a cautionary and a probing component,

$$J(u_0) \;=\; J_{D}(u_0) + J_{C}(u_0) + J_{P}(u_0).$$

This expression is identical to equation (5.5) in Tucci et al. (2010), but now the parameters associated with the deterministic component, the y's, are defined in terms of $\hat b_0$, the estimate of the unknown parameter at time 0, and of the solution of the infinite horizon Riccati equation.4 The parameters associated with the cautionary component, the $d_i$, depend in addition on the variance $v_0^b$ of that estimate,5 and the parameters related to the probing component, the f's, take an analogous form. As shown in Amman and Tucci (2017), the new definitions are perfectly consistent with those associated with the two-period finite horizon model.

4 In this case the Riccati equation is a scalar function and can easily be solved. The multidimensional case can be more complicated to solve; see Amman and Neudecker (1997).

5 This compares with $\bar k^{bx}_1 = 2 w_2 (a + b G_1) G_1$, where the feedback matrix is defined as $G_1 = a b w_2 / (\lambda_1 + b^2 w_2)$, in the two-period finite horizon model.
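In code, the decomposition can be exploited directly: for each candidate first-period control the three components are evaluated and summed, and the minimizing u_0 is returned. The Python sketch below assumes the component formulas of Amman and Tucci (2017) are available as callables; their exact closed-form expressions are given in that paper, so the callables here are placeholders.

import numpy as np

def approximate_control(J_det, J_caut, J_prob, u_grid):
    """Minimize the approximate cost-to-go J(u0) over candidate controls.

    J_det, J_caut and J_prob are callables u0 -> cost implementing the
    deterministic (y), cautionary (d) and probing (f) components of the
    cost-to-go; they stand in for the formulas in Amman and Tucci (2017).
    """
    costs = np.array([J_det(u) + J_caut(u) + J_prob(u) for u in u_grid])
    i = np.argmin(costs)
    return u_grid[i], costs[i]

# Usage with a coarse grid of candidate first-period controls:
# u0, cost = approximate_control(J_det, J_caut, J_prob,
#                                np.linspace(-10.0, 10.0, 2001))

Because each component is a closed-form function of u_0, no grid over the state space is needed, which is what makes the method tractable in higher dimensions.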

Experimentation
In this section the infinite horizon control for the MBW model is computed with the value function and the approximation method when the system is assumed stationary. Moreover, an equal penalty weight is applied to deviations of the state and control from their desired paths, assumed zero here. In order to stay as close as possible to the case discussed in Beck and Wieland (2002, page 1367) and Amman et al. (2018), the parameters are a = 0.7, g = 0, q = 1, w = 1, λ = 1, ρ = 0.95. Figures 1-4 contain the four typical solutions of the model for b_0 = 0.05, b_0 = 0.4, b_0 = 1.0 and b_0 = 2.0. In this situation both the approximation approach (solid line) and the value function approach (dotted line) suggest a more conservative control than in the nonstationary case with no penalty on the control. The difference between the two approaches tends to be much smaller when the initial state is not too far from the desired path, whereas it is approximately the same for x_0 = −5 or x_0 = 5 (compare Figure 1 in Amman et al. (2018) with the top right panel in Figure 2); the reader should keep in mind that the opposite sign convention is used in Amman and Tucci (2018). By comparing the different cases reported below, it is apparent that the difference between the solutions generated by the two methods depends heavily upon the level of uncertainty about the unknown parameter. Moreover, it turns out that the distinction between high uncertainty and extreme uncertainty becomes relevant.

Figure: Optimal solution for b_0 = −0.4, v_0 = 1 (value function approach vs approximation approach).
When there is very little or no uncertainty about the unknown parameter, as in Figure 4, a situation where the t-statistics ranges from virtual certainty (top left panel) to 2 (bottom right panel), the two solutions are almost identical, as should be expected. As the level of uncertainty increases, as in Figures 3 and 2, the difference becomes more pronounced and the approximation method is usually less active than the value function approach. Figure 2, with the t ranging from certainty to 0.4, and Figure 3, with t going from certainty to 1, reflect the most common situations. However, when there is high uncertainty, as in Figure 1, where the t goes from 5 (top left panel) to 0.05 (bottom right panel), the approximation method shows very aggressive solutions when the t-statistics is around 0.1-0.2 and the initial state is far from its desired path. In the extreme cases where the t drops below 0.1, bottom panels of Figure 1, this method finds it optimal to perturb the system in the 'opposite' direction in order to learn something about the unknown parameter. These are cases where the 99 percent confidence intervals for the unknown parameter are (−2.15, 2.05), when v_0 = 0.49, and (−3.05, 2.95), when v_0 = 1. Alternatively, if the initial state is close to the desired path this method is very conservative.
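The intervals quoted above are consistent with a three-standard-deviation ('99 percent') band around a parameter estimate of −0.05; the value −0.05 is our inference from the quoted bounds, which a few lines of Python confirm.

import numpy as np

def t_stat(b_hat, v_b):
    """Uncertainty proxy used throughout: estimate over its standard error."""
    return abs(b_hat) / np.sqrt(v_b)

def band(b_hat, v_b, k=3.0):
    """k-standard-deviation band for the unknown parameter."""
    s = np.sqrt(v_b)
    return b_hat - k * s, b_hat + k * s

print(band(-0.05, 0.49))    # -> (-2.15, 2.05)
print(band(-0.05, 1.00))    # -> (-3.05, 2.95)
print(t_stat(-0.05, 1.00))  # -> 0.05, the bottom right panel of Figure 1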
On the other hand, the value function approach seems somehow 'insulated' from the extreme uncertainty surrounding the unknown parameter. As apparent from Figure 1, this optimal control stays more or less constant in the presence of an extremely uncertain parameter. The major consequence seems to be a bigger 'jump' in the control applied when the initial state is around the desired path. Summarizing, very high parameter uncertainty results in a more aggressive control when the initial state is in the neighborhood of its desired path and a relatively less aggressive control when it is far from it.

Figure: Optimal solution for b_0 = −2, v_0 = 1 (value function approach vs approximation approach).
Figure 5 uses the same four values of b_0 to compare the two methods at various variances, when the initial state is x_0 = 1. Again, the difference is more noticeable when the t-statistics drops below 1. In the presence of extreme uncertainty, i.e. when this statistics falls below 0.5, and an initial state far from the desired path, this difference not only grows larger and larger but may also be associated with the two approaches giving 'opposite' solutions, i.e. a positive control vs a negative control. This is what happens in the top left panel of Figure 5, where for very low t-statistics the value function approach suggests a positive control whereas the approximation approach suggests a slightly negative control.

Figure: t-statistics and control versus variance, b_0 = −2, x_0 = 1 (value function approach vs approximation approach).
The same qualitative results characterize a situation with a much smaller system variance, namely q = 0.01, as shown in Figures 6-9. In this scenario controls are less aggressive than in the previous one and, as previously seen, the approximation approach is generally less active than its competitor. The optimal control appears insensitive to the system noise when the parameter associated with it has very little uncertainty, as in the top left panels of Figures 1 and 6. However, when the unknown parameter has a very low t-statistics the control is significantly affected by the system noise. Then the distinction between high and extreme uncertainty about the unknown parameter becomes even more relevant than before. At a preliminary examination it seems that a higher system noise has the effect of 'reducing' the perceived parameter uncertainty. For example, the bottom right panels in Figures 2 and 7 show the optimal controls when the t associated with the unknown parameter is around 0.5. This parameter uncertainty is associated with a very low system noise in the latter case. Therefore it is perceived in its real dimension, and the approximation approach suggests a control in the 'opposite' direction when the initial state is far from its desired path.

Figure: Optimal solution for b_0 = −2, v_0 = 1 (value function approach vs approximation approach).
Figure 10 uses the same four values of b_0 to compare the two methods at various variances, when the initial state is x_0 = 1 and the system variance is q = 0.1. As in Figure 5, the difference is more noticeable when the t-statistics drops below 1.

Figure: t-statistics and control versus variance, b_0 = −2, x_0 = 1 (value function approach vs approximation approach).
It is unclear at this stage whether the distinction between high uncertainty and extreme uncertainty is relevant also for the nonstationary case treated in Amman et al. (2018). A hint may be given by their Figure 8, which reports the results for the case where the parameter estimate is 0.3 and its variance is 0.49, i.e. the t-statistics of the unknown parameter is around 0.4. In this case the approximation approach is more active than the value function approach when the initial state is far from the desired path, i.e. x_0 greater than 3. This seems to suggest that the distinction between high and extreme uncertainty is relevant also when the system is nonstationary and no penalty is applied to the controls.

Conclusions
In a previous paper, Amman et al. (2018) compare the value function and the approximation method in a situation where the controller is dealing with a nonstationary process and there is no penalty on the control. They conclude that differences are small when the effects of learning are limited. In this paper we find that similar results hold for the more commonly studied case of a controller facing a stationary process and a positive penalty on the control. Moreover, we find that a good proxy for parameter uncertainty is the usual t-statistics and that it is very important to distinguish between high and extreme uncertainty about the unknown parameter. In the latter situation, i.e. t close to 0, when the initial state is very far from its desired path and the parameter associated with the control is very small, the approximation method becomes very active. Eventually it even perturbs the system in the 'opposite' direction. This is something that needs further investigation with other models and parameter sets. It may be due to the fact that the computational approximation to the integral needed in the value function approach does not fully incorporate these extreme cases. Or it may be the consequence of some hidden relationship between the parameters and the components of the cost-to-go in the approximation approach. However, the behavior of the 'approximation control' makes full sense. Its suggestion is: in the presence of extreme uncertainty, don't be very active if you are close to the desired path, but 'go wild' if you are far from it. If this characteristic is confirmed, it may represent a useful additional tool in the hands of the control researcher to discriminate between cases where the control can be reliably applied and cases where it cannot.