Dynamic pricing with finite price sets: a non-parametric approach

We study price optimization of perishable inventory over multiple, consecutive selling seasons in the presence of demand uncertainty. Each selling season consists of a finite number of discrete time periods, and demand per time period is Bernoulli distributed with price-dependent parameter. The set of feasible prices is finite, and the expected demand corresponding to each price is unknown to the seller, whose objective is to maximize cumulative expected revenue. We propose an algorithm that estimates the unknown parameters in a learning phase, and in each subsequent season applies a policy determined as the solution to a sample dynamic program, which modifies the underlying dynamic program by replacing the unknown parameters by the estimate. Revenue performance is measured by the regret: the expected revenue loss relative to the optimal attainable revenue under full information. For a given number of seasons n, we show that if the number of seasons allocated to learning is asymptotic to (n^2 log n)^{1/3}, then the regret is of the same order, uniformly over all unknown demand parameters. An extensive numerical study that compares our algorithm to six benchmarks adapted from the literature demonstrates the effectiveness of our approach.


Background
Pricing of a perishable product is a central problem in many industries. As discussed by Talluri and van Ryzin (2005), a classical setting involves a firm facing successive seasons of finite length during which a finite fixed inventory is sold, and such that at the end of the season, unsold inventory expires worthless. The firm seeks to set prices in a way that maximizes the expected revenue. Instances of this problem are found in many industries, including fashion, retail, air travel, hospitality, and leisure. In Gallego and van Ryzin (1994), an optimal price is shown to be a function of the state (t, c) of the system, where t denotes remaining time to the end of the season and c denotes remaining inventory; this function increases with remaining time and decreases with remaining inventory. To compute these optimal prices, it is essential to know the relationship between price and expected demand, often referred to as the demand function or demand curve.
In practice, decision makers seldom have full knowledge about the demand process. The absence of full information about the demand process introduces a tension between demand learning ('exploration', by experimenting with selling prices) and revenue earning ('exploitation', i.e. using the estimated optimal prices). The longer one spends learning the demand properties, the less time remains to exploit that knowledge and earn revenue; on the other hand, less time spent on demand learning results in higher uncertainty that could diminish the revenue earned during the exploitation phase. A key feature of a good self-learning pricing algorithm is its ability to optimally balance this tension.
The problem of designing asymptotically optimal self-learning pricing algorithms has received considerable attention (see the literature review below). Several authors (e.g. Besbes and Zeevi 2009; Wang et al. 2014; Lei et al. 2014) have analyzed optimal pricing and learning with finite inventories in a particular asymptotic regime, where the performance of a price policy is evaluated when both the expected demand per season and the initial inventory grow large. In this so-called fluid regime, the problem is simplified by essentially removing the stochasticity of demand. This regime is well suited for applications where initial inventory and length of the selling season are large. However, in applications where initial inventory does not grow large, this asymptotic regime is not informative for a policy's performance. An informative example is ferry services. These are services that are regularly offered, with a finite selling season (tickets are sold until the departure of the ferry), finite inventory (determined by the size of the ferry), and with multiple selling seasons (corresponding to different days of departure). Another example comes from grocery retail: brick-and-mortar retail shops typically have a small inventory of each specific product to sell, and face many selling seasons during which a constant demand function might be postulated. Anecdotal evidence from the United Kingdom shows that it is not uncommon that the price is dropped several times, and, very near the closing time on the "use by" or "sell by" date, it is a small fraction of the original. In these examples, the relevant regime to study the performance of pricing algorithms is that of repeated seasons with bounded inventory, and not a regime where the size of the ferry, or the food inventory, goes to infinity.
Perhaps closest to our work is the study by den Boer and Zwart (2015), who consider dynamic pricing with finite inventories in a setting with multiple, consecutive selling seasons, each with fixed, finite inventory. The authors assume a parametric demand model, characterized by two unknown parameters that are learned from accumulating sales data, and design and analyze an asymptotically near-optimal pricing algorithm. A disadvantage of their parametric approach is the risk of model mis-specification: large losses can be incurred if the true demand function is not of the assumed form (see Sect. 6 for a numerical illustration). To mitigate this risk, a non-parametric approach is needed that (i) does not restrict itself to a parametrized sub-class of demand functions, and (ii) performs well in a regime with consecutive selling seasons with bounded initial inventory. Such an approach is taken by this paper.

Overview of contributions
We consider a monopolist seller of a finite inventory of a perishable product, which is sold during consecutive selling seasons. In the same spirit as den Boer and Zwart (2015), we formulate a discrete-time, finite-state Markov Decision Process (MDP) in which the underlying state is the pair (inventory, time-to-perish); this MDP characterizes optimal pricing under knowledge of the demand function, and it is the central element in our setting, where the transition probabilities are unknown to the seller. We assume a finite number κ of feasible prices. In our basic model, each season has a length of T periods and the same initial inventory x; this assumption is later relaxed to allow for non-identical selling seasons. In any period during which the ith price is offered, the demand for the product is a Bernoulli(λ_i) random variable. The vector of demand rates (purchase probabilities) λ = (λ_1, …, λ_κ) ∈ [0, 1]^κ is unknown to the seller. We emphasize that we make no assumptions on λ.
The algorithm that we propose separates a given selling horizon of n seasons into an exploration and an exploitation phase. The exploration phase rotates the prices throughout, so that each price is applied (nearly) the same number of times (periods); it concludes with a (maximum likelihood) estimate λ̂ = (λ̂_1, …, λ̂_κ) of λ. A pricing policy is then constructed from the corresponding dynamic programming recursion, in which the unknown λ is replaced by the estimate λ̂; we refer to this recursion as the sample dynamic program. The exploitation phase applies this policy throughout all the remaining time periods. It is noteworthy that this is not a fixed-price policy; instead, the price depends on the system state (t, c), which is constantly evolving. Our main result establishes that a carefully tuned length of the exploration phase makes our policy consistent, with regret O((n^2 log n)^{1/3}), uniformly over λ (see Theorem 1). Thus, the revenue generated by the proposed algorithm gets arbitrarily close to the best achievable revenue under full knowledge of the unknown demand parameters, as n grows large. It is worth noting that this result holds without assuming that the initial inventory of some season grows large, as in antecedent literature. This theorem is then extended to a sequence of seasons that need not have the same initial inventory or length (see Theorem 2); the extension merely requires that the sequences of inventory levels and season lengths are bounded.
We provide an extensive numerical study in which we compare the performance of our algorithm to six alternatives, based on four papers in the literature: algorithms based on the fluid approximation in Besbes and Zeevi (2012, Algorithm 1, Section 3.1); adaptations of the upper-confidence-bound approach of Babaioff et al. (2015); Thompson sampling (Ferreira et al. 2018, Algorithm 2); and the method of den Boer and Zwart (2015) adapted for a finite price set. For a wide variety of demand functions we show that our policy outperforms these alternatives; see Sect. 6 for more details.

Related literature
The literature on pricing strategies is vast. We refer to Bitran and Caldentey (2003); Elmaghraby and Keskinocak (2003); Talluri and van Ryzin (2005); Gallego and Topaloglu (2019) for comprehensive reviews on the subject. A recent survey and classification that focuses on pricing and learning appears in den Boer (2015).
This paper is related to literature that addresses demand learning in dynamic pricing problems. For a single-product setting without an inventory constraint, examples are Broder and Rusmevichientong (2012); den Boer and Zwart (2014); Besbes and Zeevi (2015), and Keskin and Zeevi (2014), who design and analyze self-learning pricing algorithms under a variety of demand models. Closer to this paper is a stream of literature in which inventory is finite, implying that the optimal price is not a single value but a function of the system state (remaining time and remaining inventory). In Lin (2006), Aviv and Pazgal (2005), Araman and Caldentey (2009), and Farias and Van Roy (2010), the demand function is characterized by a single unknown parameter that is learned in a Bayesian fashion. Besbes and Zeevi (2009); Wang et al. (2014); Lei et al. (2014) consider more general demand models, but assume an asymptotic regime where inventory grows large; as explained above, results derived in this regime are not informative for applications where inventory is bounded. Demand learning for non-perishable products is also related to our work. A recent example is Chen et al. (2019), who employ non-parametric demand learning towards joint pricing and inventory decisions. The no-perish assumption means that demand may be instantly met by an inventory unit that was procured at an arbitrary time point in the past, which makes these types of problems different from the one considered in this paper.
In the literature that addresses demand learning, a common approach is to estimate, based on accumulating sales data, the optimal solution to some fluid model (an approximation of the stochastic control problem) efficiently enough to (nearly) minimize revenue losses asymptotically. One approach, exemplified by Besbes and Zeevi (2009, 2012), separates the selling season into disjoint pure-exploration (learning) and exploitation phases. A worst-case upper bound on losses is minimized by carefully selecting the amount of time to spend on learning, in a regime where the total expected demand and inventory level grow large at the same rate. A second approach formulates a multi-armed bandit problem, and deals with the exploration-exploitation tradeoff via a long-established method known as the upper confidence bound, whose principle is "optimism in the face of uncertainty". Here, the usual maximum likelihood estimates of demand are replaced by upper confidence bounds, and exploration and exploitation occur simultaneously. This approach is exemplified by Babaioff et al. (2015) and Badanidiyuru et al. (2013). Babaioff et al. (2015) address the case of a continuous price set; their upper confidence bounds apply to the expected revenue associated to each price, where the set of prices is asymptotically dense on the price domain. Badanidiyuru et al. (2013) address the case of a finite price set, and study a general model where rewards (revenue) and resource consumption are sampled from an unknown time-invariant distribution. Using upper and lower confidence bounds on mean rewards and mean resource consumptions, respectively, they aim to determine an optimal time-invariant mix of prices; optimality is with respect to the linear program (fluid approximation) in Besbes and Zeevi (2012). These papers characterize the regret through upper and lower bounds in a regime where expected demand grows to infinity. Ferreira et al. (2018) employ a randomized Bayesian method known as Thompson sampling, whose aim is to learn efficiently a mix of prices that is optimal with respect to the same linear program. In a regime where mean demand grows to infinity, they upper-bound the Bayesian regret: the conditional average regret given a prior distribution on the demand vector.
The relationship between the current paper and this stream of literature can be summarized as follows: in analogy with this literature, in our model the aggregate amount of inventory and the aggregate mean demand over n seasons grow proportionally to n; however, this growth does not occur within one season of "large size", but instead through a sequence of seasons with bounded inventory and season length, and common demand function. This boundedness entails that even if one prices optimally with respect to some fluid model, one has not closed the fluid gap: the difference in expected revenue between the optimal policy and the fluid-optimal policy. This gap, which essentially arises by neglecting the randomness of demand, may be negligible in cases where both inventory and length of the season are reasonably large. But, as observed by Maglaras (2011, page 6), "one would expect that the discrete and stochastic nature of the pricing problem to be [sic] relevant when selling 4 newly constructed single family homes over the course of 24 weeks, but it may be less relevant when selling 4000 pairs of skis over a similar time duration from, say, October to March."
Our model is designed precisely for settings where the fluid regime is not informative: that is, when inventory does not grow large but is finite (such as, e.g., in ferry services and grocery retail), and when neglecting the structure of the underlying MDP is detrimental in terms of revenue performance.
Perhaps closest to our work is den Boer and Zwart (2015), who study optimal pricing with multiple, consecutive selling seasons, each with finite initial inventory. In contrast to our paper, they work with a demand function of a known parametric form with two unknown parameters. They develop an (almost) certainty-equivalent strategy, which at all times maintains a parametric (quasi-maximum likelihood) estimate of the demand function, and prices optimally under the corresponding Markov decision process in which the unknown demand function is replaced by its estimate. The authors provide an upper bound on the regret after n selling seasons, and accompany this result by a lower bound that holds for any policy. A drawback of their parametric approach is the risk of model mis-specification: in reality, demand may not be of the assumed parametric form, and pricing recommendations may be suboptimal. In our model we mitigate this drawback by making no assumption whatsoever on the shape of the demand function.
The remainder of this paper is organized as follows. Section 2 formulates the problem, defines the regret, and discusses key differences with alternative approaches. Section 3 presents the proposed strategy, and Sect. 4 contains the asymptotic performance analysis. The extension to non-identical seasons appears in Sect. 5. Section 6 presents the results of our numerical study. A few auxiliary results appear in the Appendix.
Notation The notation ":=" stands for "is defined as". A statement such as λ = λ(p) := f(p) means that the function λ(p) is identical to f(p), and that we may write λ instead of λ(p) or f(p). We use N := {0, 1, 2, …} for the set of natural numbers. For a set A, A^c denotes the complement. Given any sample space Ω of which the generic element is denoted ω, and given a set A ⊆ Ω, we write 1[A] for 1[ω ∈ A], the random variable taking value 1 if the event A occurs, and 0 otherwise. For any real z, we write ⌊z⌋ for the floor, the largest integer that is no larger than z; ⌈z⌉ for the ceiling; [z] for the integer nearest to z; and z^+ for the positive part max{0, z}. For any vector z = (z_i) we define ‖z‖ := max_i |z_i|. A sum over an empty index set, for example ∑_{i=1}^{0} z_i, is understood as zero. With a_n and b_n being nonnegative sequences, we write a_n = O(b_n) if a_n/b_n is bounded from above by a constant; we write a_n = Ω(b_n) if a_n/b_n is bounded from below by a constant; and if a_n/b_n is bounded from both above and below, then we write a_n ≍ b_n.

Problem formulation
Basic elements We consider a monopolist seller of perishable products which are sold during consecutive selling seasons. Each season has a positive integer length of T (indivisible) time periods. At the start of each selling season the seller has a positive integer inventory of x units, which can only be sold during that particular season. At the end of each season, any unsold inventory is worthless, and its disposal costs nothing. In our basic model, identical such seasons occur in succession: the ith season consists of the time periods indexed from (i − 1)T + 1 to iT, for all i = 1, 2, …, n, where n is a selling horizon that is known at time zero. In Sect. 5 we relax this assumption, and consider non-identical seasons.
There are κ distinct actions corresponding to prices that increase in the action index: 0 < p_1 < p_2 < … < p_κ < ∞. There is additionally a shutoff action, indexed 0, whose sole function is to shut off the demand; the price associated to this action is immaterial (since no sale is ever made); thus we set p_0 = 0 without loss of generality. In each period u, the seller chooses an action A_u ∈ A, where A = {0, 1, 2, …, κ}, and thus sets the price to p_{A_u}. After setting the price, a binary demand is observed, which indicates whether one unit is sold or not.
The demand is stochastic and price-dependent. Write D_u for the demand in period u, and define the set H_u := {(a_1, …, a_u, d_1, …, d_u) ∈ A^u × {0, 1}^u} for all u ∈ N, and H_0 := ∅. For each u = 1, …, nT, each element (a_1, …, a_u, d_1, …, d_u) of H_u is a potential history of prices and demands that the seller might observe in the first u time periods. A (pricing) strategy σ = (σ_u)_{u∈N} is a collection of functions σ_u : H_{u−1} → A such that at each time u = 1, …, nT, the seller's action is A_u = σ_u(A_1, D_1, …, A_{u−1}, D_{u−1}). Thus, the strategy specifies, for each possible data set of previously used prices and corresponding demand observations, which price should be used in the next time period.
Our main assumption with respect to the demand mechanism is that each action (price) a ∈ A is associated to a probability of purchase, λ_a, which is unknown to the seller, except for the shut-off property λ_0 = 0. Specifically, we assume that, conditionally on A_u = a, D_u is Bernoulli distributed with mean λ_a, for all a ∈ A, and is independent of past actions and demands {A_1, …, A_{u−1}, D_1, …, D_{u−1}}. The vector of purchase probabilities, λ := (λ_1, …, λ_κ), is unknown to the seller. We write Λ := [0, 1]^κ for the set of all possible purchase probability vectors.
To describe the dynamics of the seller's remaining inventory, observe that any period u is contained in the season numbered ⌈u/T⌉ and corresponds to the seasonal remaining time t_u := ⌈u/T⌉T − u + 1 ∈ {1, …, T}, which is the number of periods that remain in the season containing period u. For example, for T = 10, the period indexed u = 11 is contained in season ⌈u/T⌉ = 2 and corresponds to a seasonal remaining time t_11 = 2 · 10 − 11 + 1 = 10. The end of any season coincides with the beginning of a new season; at any such boundary, any unused inventory from the ending season expires worthless and at no cost; the inventory of the new season is replenished to x, and the seasonal remaining time t_u becomes T. The inventory at the beginning of period u is denoted C_u throughout; in the basic model, it evolves as follows: C_1 := x and, for u = 1, …, nT − 1, C_{u+1} = x if t_{u+1} = T (a new season starts), and C_{u+1} = C_u − 1[C_u > 0] D_u otherwise. The revenue earned in any period u is p_{A_u} min{C_u, D_u} = p_{A_u} 1[C_u > 0] D_u. Given a planning horizon of n seasons, the seller's objective is to determine a strategy σ that maximizes the expected revenue E_σ[∑_{u=1}^{nT} p_{A_u} 1[C_u > 0] D_u], where E_σ denotes the expectation under strategy σ.
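The period-to-season bookkeeping is a one-liner in code; the following sketch (function name ours) reproduces the example in the text:

```python
def season_and_remaining_time(u, T):
    """Map a global period index u >= 1 to (season number ceil(u/T),
    seasonal remaining time t_u = ceil(u/T) * T - u + 1)."""
    season = -(-u // T)       # integer ceiling of u / T
    t_u = season * T - u + 1  # periods left in that season, in {1, ..., T}
    return season, t_u

# The example from the text: u = 11 with T = 10 lies in season 2, t_11 = 10.
assert season_and_remaining_time(11, 10) == (2, 10)
```

Note that t_u = 1 flags the last period of a season, and t_{u+1} = T flags the replenishment boundary.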
Optimal solution under full information Provided the probability vector λ is known, an optimal pricing strategy can be determined by solving a Markov Decision Process (MDP) corresponding to a single selling season. The states, transitions, and rewards of this MDP are defined as follows. A state (t, c) encodes that the seasonal remaining time is t and the remaining inventory is c. The set of states is X := {0, 1, …, T} × {0, 1, …, x}. The transition dynamics depend on the actions taken, and are as follows. If action a is used in state (t, c), then with probability λ_a a state transition (t, c) → (t − 1, (c − 1)^+) occurs, and revenue p_a 1[c > 0] is earned; with probability 1 − λ_a a state transition (t, c) → (t − 1, c) occurs, and no revenue is earned.
A policy π is a set of actions at all states: π = (π_{t,c})_{(t,c)∈X} with π_{t,c} ∈ A for each (t, c) ∈ X. The set of all policies is denoted Π, and is finite. Given any policy π and state (t, c) ∈ X, the value of the state is the expected revenue-to-go (to the end of the season) when starting in this state and using the actions of π; it is denoted V^π_{t,c}. These values satisfy the recursion

V^π_{t,c} = λ_{π_{t,c}} (p_{π_{t,c}} 1[c > 0] + V^π_{t−1,(c−1)^+}) + (1 − λ_{π_{t,c}}) V^π_{t−1,c},  t = 1, …, T,

where V^π_{0,c} := 0 for all c. By the finiteness of Π, there exists an optimal policy π* ∈ Π that maximizes V^π_{T,x}. The optimal value at a state (t, c) is the maximum expected revenue-to-go, starting from that state; it is denoted V_{t,c}. The values V and the policy π* are determined recursively, backward in time:

V_{t,c} = max_{a∈A} { λ_a (p_a 1[c > 0] + V_{t−1,(c−1)^+}) + (1 − λ_a) V_{t−1,c} },  V_{0,c} := 0,

and V_{T,x} is the maximum possible expected revenue of a seller that knows λ, for a season of length T and inventory x. By the (conditional) independence of demand across seasons, an optimal strategy consists of applying π* in each season s = 1, …, n.

Performance measure The regret of a strategy σ over the first n selling seasons is defined as

R_n(σ) := R_n(σ; λ, x, T) := n V_{T,x} − E_σ[∑_{u=1}^{nT} p_{A_u} 1[C_u > 0] D_u];

it depends on the unknown λ, and also on x and T. The regret is the (expected) revenue loss incurred by strategy σ relative to the optimal strategy of using the policy π* in each season. The regret is based on an integer number of seasons, rather than an integer number of periods; this is natural, since policy (and revenue) optimality is with respect to a whole season and not any individual period. In our numerical study we mainly work with the relative regret, defined and denoted as R̄_n := R̄_n(σ) := R̄_n(σ; λ, x, T) := R_n(σ; λ, x, T)/(nV_{T,x}). By definition, the value of the relative regret is a number that always lies in the interval [0, 1]; the smaller its value, the better the performance of σ; a value of zero indicates that σ extracts the maximum possible revenue.
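For concreteness, the backward recursion can be sketched in a few lines of Python (a minimal illustration; the function and variable names are ours, and action 0 is the shutoff with λ_0 = 0):

```python
def solve_full_information_mdp(prices, lam, T, x):
    """Backward induction for the single-season pricing MDP.

    prices[a] is the price of action a (action 0 is the shutoff, price 0),
    lam[a] is the purchase probability under action a (lam[0] == 0).
    Returns the optimal values V[t][c] and an optimal policy pi[t][c].
    """
    K = len(prices) - 1  # number of real price actions
    # V[t][c]: maximum expected revenue-to-go from state (t, c); V[0][c] = 0
    V = [[0.0] * (x + 1) for _ in range(T + 1)]
    pi = [[0] * (x + 1) for _ in range(T + 1)]
    for t in range(1, T + 1):
        for c in range(0, x + 1):
            best_val, best_a = -1.0, 0
            for a in range(0, K + 1):
                # sale w.p. lam[a]: earn prices[a] if c > 0, go to (t-1, (c-1)+)
                nxt_sale = max(c - 1, 0)
                val = lam[a] * (prices[a] * (c > 0) + V[t - 1][nxt_sale]) \
                      + (1 - lam[a]) * V[t - 1][c]
                if val > best_val:
                    best_val, best_a = val, a
            V[t][c], pi[t][c] = best_val, best_a
    return V, pi
```

The state space has (T + 1)(x + 1) elements and κ + 1 actions, so the recursion runs in O(T · x · κ) time; V[T][x] is the full-information benchmark that enters the regret.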

Proposed pricing strategy
In this section we propose a data-driven pricing strategy that learns the optimal policy defined in Sect. 2. The strategy divides the time horizon into two phases, an exploration phase and an exploitation phase. In the exploration phase, all prices are used a nearly equal number of times, and the obtained sales data is used to construct an estimate λ̂ of the unknown demand vector λ. In the exploitation phase, the policy π* defined in Sect. 2, with λ replaced by its estimate λ̂, is used in all remaining selling seasons.
More specifically, given an estimate λ̂ = (λ̂_1, …, λ̂_κ) (purely from the exploration phase here; this is relaxed later), the policy that is used throughout the exploitation phase is the solution of the sample dynamic program:

V̂_{t,c} = V̂_{t−1,c} + max_{a∈{1,…,κ}} λ̂_a (p_a − ΔV̂_{t−1,c}),  π̂_{t,c} ∈ argmax_{a∈{1,…,κ}} λ̂_a (p_a − ΔV̂_{t−1,c}),   (4)

where ΔV̂_{t,c} := V̂_{t,c} − V̂_{t,c−1}, V̂_{t,0} := 0 for all t, and V̂_{0,c} := V_{0,c} = 0 for all c. In particular, the shutoff action is excluded at all states with t ≥ 1 and c ≥ 1. We denote this policy as π̂, or, to make the dependence on λ̂ explicit, as π(λ̂).
For each price index i and each period u, let N_i(u) := ∑_{v=1}^{u} 1[A_v = i]; it is the count of time periods up to (and including) u during which the price is p_i.
Step 2 (Exploration). (a) For all u = 1, …, τT: if C_u > 0, then set A_u as the action i for which N_i(u − 1) is the smallest (in case of a tie, select the price with the lowest index); formally, A_u ∈ argmin_{i∈{1,…,κ}} N_i(u − 1). (b) Let N_i := N_i(τT) be the count of time periods in the first τ seasons during which the price was p_i, and let λ̂_i := N_i^{−1} ∑_{u=1}^{τT} 1[A_u = i] D_u, for i = 1, …, κ. Step 3 (Exploitation). For each season s = τ + 1, …, n, apply the policy π(λ̂) defined in (4).
Step 2(a) ensures near-parity of the price-specific sample sizes at all times (the motivation for this is seen in the proofs that follow). On a high level, this strategy is reminiscent of classical explore-then-commit policies for multi-armed bandit problems; see, e.g., Lattimore and Szepesvári (2019), Chapter 6. These policies divide the time horizon into two phases. In the first phase all actions are tried a number of times, in order to estimate the expected revenue associated to each action. In the second phase, an action with the highest estimated expected revenue is used at all times. Our strategy loosely adapts this idea to the MDP in Sect. 2: an optimal 'action' (of the multi-armed bandit problem) corresponds to an optimal policy for the MDP here. In the exploitation phase, we thus use the estimated state-dependent optimal prices (i.e., the estimated optimal policy π̂) and not a fixed price at all times.
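The exploration step can be sketched as a small simulation (a sketch under assumed inputs; the helper names are ours, and once the season's inventory is depleted the simulator simply collects no further observations that season, mirroring the shutoff action):

```python
import random

def explore_and_estimate(prices, lam, T, x, tau, seed=0):
    """Exploration phase (Step 2), simulated: rotate the prices so that the
    usage counts N_i stay within one unit of each other, then return the
    sample means as the estimate of lam.  Here `lam` holds the true purchase
    probabilities, known only to the simulator, never to the seller."""
    rng = random.Random(seed)
    K = len(prices)
    counts = [0] * K   # N_i: number of periods at which price i was offered
    sales = [0] * K    # number of units sold at price i
    for _season in range(tau):
        c = x          # inventory is replenished at the start of each season
        for _t in range(T):
            if c == 0:
                continue  # inventory depleted: shutoff, no observation
            # least-used price; ties broken by lowest index (step 2(a))
            i = min(range(K), key=lambda j: (counts[j], j))
            d = 1 if rng.random() < lam[i] else 0  # Bernoulli(lam[i]) demand
            counts[i] += 1
            sales[i] += d
            c -= d
    lam_hat = [sales[i] / counts[i] if counts[i] else 0.0 for i in range(K)]
    return lam_hat, counts
```

Because each season contains at least min{T, x} periods with positive inventory, every count grows at rate at least min{T, x}·τ/κ (up to rounding), which is exactly what feeds the concentration argument of Proposition 1.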

Upper bound
In this section we show that the prices generated by our pricing strategy converge to the optimal prices corresponding to the MDP defined in Sect. 2, as the number of selling seasons n grows large. More precisely, we prove that the regret of strategy σ(τ_n) is bounded above by a constant times n^{2/3} (log n)^{1/3}, under a suitable choice of the exploration length τ_n. The constant depends only on x and T and grows at most linearly in each. This bound holds uniformly over all probability vectors λ. Equivalently, the relative regret converges to zero at rate O((log(n)/n)^{1/3}), uniformly over all λ.
Theorem 1 Set τ_n ≍ (n^2 log n)^{1/3}. Then, there exists a finite positive constant K_1 such that, for all n ≥ 2,

sup_{λ∈Λ} R_n(σ(τ_n); λ) ≤ K_1 (n^2 log n)^{1/3}.

To prove the theorem, we first provide a bound on the estimation error (Proposition 1) and a bound on the effect of this error during the exploitation phase (Proposition 2). Let λ̂_n be the estimator of λ corresponding to σ(τ_n); that is, obtained after an exploration period consisting of τ_n seasons. Recall that N_i = N_i(τ_n T), i = 1, …, κ, is the number of times up to the end of the learning phase that the price on offer is p_i.

Proposition 1 (Estimation error) There is a constant f > 0, depending only on x, T, and κ, such that for all n ∈ N and δ > 0, P(‖λ̂_n − λ‖ > δ) ≤ 2κ exp(−2 f τ_n δ^2).
Proof of Proposition 1 Let n ∈ N and δ > 0. We first obtain a lower bound on N_i = N_i(τ_n T) for each i. Let u denote a period of the learning phase such that the inventory is positive, that is, 1 ≤ u ≤ τ_n T and C_u > 0. Any such period u contributes one unit to exactly one of the counts N_1, …, N_κ. Inequality (a) holds because the learning phase consists of τ_n seasons, in each of which there are at least min{T, x} periods u such that C_u > 0 (since there are x units initially, T sale periods, and no more than one unit is sold per period). Inequality (b) is the near-parity of sample sizes across prices, which is ensured by step 2(a) in the definition of σ. Now (6) implies N_i ≥ f τ_n for each i. Next, define the event E_n := {‖λ̂_n − λ‖ ≤ δ}; step (b) follows from a union bound over the κ prices. To justify step (c), observe that λ̂_{n,i} is the sample mean of N_i ≥ f τ_n i.i.d. Bernoulli(λ_i) random variables, so Hoeffding's inequality yields P(|λ̂_{n,i} − λ_i| > δ) ≤ 2 exp(−2 f τ_n δ^2). Next, we bound the loss incurred by policy π(λ̂) against the optimal one.
Proposition 2 (Effect of estimation error) Let p̄ := max_{a∈A} p_a. Then

V_{T,x} − V^π̂_{T,x} ≤ 4 p̄ T ‖λ̂ − λ‖.   (10)

Proof of Proposition 2 Let ε := max_a |λ̂_a − λ_a|. We will prove two results: the value estimates V̂ are close to the optimal values,

|V̂_{t,c} − V_{t,c}| ≤ 2 p̄ t ε for all (t, c) ∈ X,   (11)

and the values V^π̂ of the policy associated to V̂ are close to these estimates,

|V̂_{t,c} − V^π̂_{t,c}| ≤ 2 p̄ t ε for all (t, c) ∈ X.   (12)

For all (t, c) such that t = 0 or c = 0, V_{t,c} = V̂_{t,c}. In addition, for all c, Then, for all actions a, This proves (11). We now consider (12). Now (10) follows directly from (11) and (12).
We now prove Theorem 1 in two steps. First, we apply Propositions 1 and 2 to obtain an upper bound on the regret incurred during the exploitation phase. The regret incurred during the exploration phase is upper-bounded by a constant times the duration of this phase. Then, we show that the choice τ_n ≍ (n^2 log n)^{1/3} minimizes the order of this upper bound, and obtain the O((n^2 log n)^{1/3}) upper bound on the regret.

Proof of Theorem 1
Let n ∈ N, n ≥ 2. We proceed in two steps.
Here (a) is argued as follows: the learning phase consists of τ_n seasons, in each of which the expected loss relative to the optimum is at most V_{T,x}. The exploitation phase consists of n − τ_n seasons, in each of which the conditional expected loss relative to the optimum, given λ̂_n, is V_{T,x} − V^{π̂(λ̂_n)}_{T,x}.
Step (b) then follows directly from Proposition 2.
Each of the summands on the right side of (14) is O((n^2 log n)^{1/3}), provided that τ_n and δ_n are chosen as in (15) and (16). For the term τ_n V_{T,x} in (14), this follows directly from (15). For the second term on the right side of (14), note that n · 4T p̄ · δ_n ≤ 4T p̄ · c_δ · (n^2 log n)^{1/3}, and (a) follows from the lower bound in (15) combined with (16) (since τ_n δ_n^2 ≥ c_τ c_δ^2 log n), and step (b) follows from the definition of c_δ. Putting all terms together, we obtain sup_{λ∈Λ} R_n(σ(τ_n); λ) ≤ K_1 (n^2 log n)^{1/3}, where K_1 collects the constants above. Remark 1 A choice of τ_n that is consistent with Theorem 1 is given in (18). To motivate the formula, observe that (17) shows that the exponential term exp(−2 f τ_n δ_n^2) in (14) is O(n^{−1/3}), and therefore the lim sup of the regret, normalized by (n^2 log n)^{1/3}, is bounded by an expression in the tuning constants alone. The right side is minimized by setting c_τ = c̄_τ and minimizing with respect to this single variable; this yields the value in (18).
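The balancing act behind this tuning can be checked numerically: the exploration loss grows like τ_n, while the exploitation loss grows like n · δ_n = n (log n / n)^{1/3}, and under τ_n ≍ (n^2 log n)^{1/3} both are of the same order. A small sketch (constants are illustrative, not the paper's c_τ, c_δ):

```python
import math

def exploration_length(n, c=1.0):
    """tau_n proportional to (n^2 log n)^(1/3); c is an illustrative constant."""
    return max(1, math.ceil(c * (n * n * math.log(n)) ** (1.0 / 3.0)))

# Exploration loss ~ tau_n; exploitation loss ~ n * (log n / n)^(1/3).
# Under this tuning, their ratio stays bounded as n grows.
for n in (10**3, 10**4, 10**5):
    tau = exploration_length(n)
    exploitation = n * (math.log(n) / n) ** (1.0 / 3.0)
    assert 0.5 < tau / exploitation < 2.0
```

Choosing τ_n of a smaller order would inflate the n·δ_n term; a larger order would inflate the τ_n term, so neither deviation improves the n^{2/3}(log n)^{1/3} rate.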

Strength of bound
If T = 1, our problem reduces to a conventional multi-armed bandit problem. It is known (see, e.g., Lattimore and Szepesvári (2019), Exercise 15.6) that in this setting, the (worst-case) regret of explore-then-commit-type strategies grows as n^{2/3}. This implies that the n^{2/3} growth rate (up to logarithmic terms) in Theorem 1 cannot be improved by more refined proof techniques, but is an intrinsic property of the strategy σ.
It is also known that in multi-armed bandit problems with K ∈ N actions (arms), strategies such as MOSS (Audibert and Bubeck 2009) achieve O(√(Kn)) worst-case regret, and this rate is the best possible (Vogel 1960). Neither this policy nor this characterization of the best possible growth rate of regret is directly transferable to an informative statement in our setting: if we naively treated our problem as a multi-armed bandit problem, then each of the K arms would correspond to a policy π as defined in Sect. 2; as a result, the number of actions would be K = κ^{T·x}, and hence the lower bound √(Kn) would be prohibitively large in many practically relevant instances. For example, in our numerical study in Sect. 6 we consider κ = 10, x = 100, and T = 65, which could correspond to 10^{6500} different policies. There do exist algorithms for multi-armed bandit problems with an underlying MDP structure (e.g. Burnetas and Katehakis 1997; Even-Dar et al. 2006; Auer and Ortner 2007). Specific to our problem is that the transition probabilities of the MDP are unknown and governed by the same unknown parameters λ, for each state (where inventory is available); this particular structure is exploited by the design of σ. Furthermore, Even-Dar et al. (2006, Theorem 13) provide upper and lower bounds (holding with high probability) on the value functions of a finite-state MDP; these bounds grow linearly in the time horizon, matching the growth rate of the multiplier (2p̄t) that we establish in Proposition 2. This suggests that the linear dependence of regret on the time horizon cannot be improved.
It is also insightful to compare our regret bound of Theorem 1 to the logarithmic regret obtained by den Boer and Zwart (2015). These authors study a parametric model where the unknown demand function is characterized by two parameters. It is shown that 'learning takes care of itself'; a near-myopic policy with full emphasis on 'exploitation' performs very well and learns the parameters 'on the fly'. This property is not true in our case; a myopic policy that does not pay careful attention to exploring all actions sufficiently often would incur a loss that grows linearly with n. The need to put more emphasis on 'exploration' naturally induces a higher regret rate.
An interesting direction for future research is to investigate whether the n^{2/3} rate of Theorem 1 can be improved, and to prove a lower bound on the (worst-case) regret achievable by any policy.

Extension
In this section, seasons are allowed to be non-identical: season length and initial inventory may vary across seasons. Two strategies are studied: (i) strategy σ′, which merely extends σ to allow non-identical seasons (when seasons are identical, σ′ and σ coincide); and (ii) strategy σ″, which extends σ in the same sense but also modifies it by requiring policy updates during exploitation. In our numerical results (all of which use identical seasons), σ″ outperforms σ′ for modest time horizons; this is what motivates it. We prove a performance guarantee analogous to that in Theorem 1 for both σ′ and σ″. We revise the model of Sect. 2 as follows. At the beginning of any season s, the inventory is replenished to x_s ∈ N, and the season length is T_s ∈ N. The terms initial inventory and season length, when speaking of season s, refer to x_s and T_s, respectively. The sequences of season lengths and initial inventories are bounded: T̄ := sup_{j∈N} T_j < ∞ and x̄ := sup_{j∈N} x_j < ∞. All seasons share the same set of feasible prices, {p_1, …, p_κ}, and vector of purchase probabilities, λ = (λ_1, …, λ_κ). The inventory dynamics are as in Sect. 2, with the inventory reset to x_s at the start of season s. The regret of a strategy σ over the first n seasons is R_n(σ; λ) := Σ_{s=1}^n V_s − E_σ[Σ_{s=1}^n Σ_{u∈U_s} p_{A_u} D_u], where U_s is the set of time periods belonging to season s, E_σ denotes expectation under σ, and V_s := V_{T_s,x_s} is the optimal value for season s under full information, as defined in Sect. 2. The strategy with policy updates, Strategy σ″(τ), extends σ(τ) by re-estimating λ and re-solving the sample dynamic program at the start of each exploitation season.
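The full-information values V_s = V_{T_s,x_s}(λ) entering the regret can be computed by standard backward induction on the state (t, c), where t is remaining time and c remaining inventory. A minimal Python sketch, assuming the Bernoulli-demand model of Sect. 2 (the function name is ours):

```python
import numpy as np

def optimal_value(T, x, prices, lam):
    """Full-information value V_{T,x}(lambda) of the pricing MDP by backward
    induction: in each period one Bernoulli sale attempt occurs at the chosen
    price, and the season ends at time T or at stock-out. Sketch consistent
    with the model described in the paper."""
    V = np.zeros((T + 1, x + 1))  # V[t, c]: value with t periods, c units left
    for t in range(1, T + 1):
        for c in range(1, x + 1):
            # offering price p_i: sale w.p. lam[i], then continue optimally
            V[t, c] = max(
                l * (p + V[t - 1, c - 1]) + (1 - l) * V[t - 1, c]
                for p, l in zip(prices, lam)
            )
    return V[T, x]
```

For instance, with one period and one unit, the value reduces to max_i p_i λ_i, the best single-period revenue rate.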

Proof of Theorem 2
The proof follows that of Theorem 1. Let n ≥ 2.
Step 1. For all s ∈ {τ_n + 1, …, n}, let λ̂_{n,s} be the estimate obtained in step (3a) of σ″(τ_n), and let π̂_{n,s} be the policy applied in step (3b), with value V̂_{n,s} := V^{π̂_{n,s}}_{T_s,x_s}(λ̂_{n,s}) as defined in (2). Then a decomposition analogous to (13) holds, in which (a) and (b) are simple extensions of their counterparts in (13).
Step 2. We claim that (23) holds for any δ > 0 with f replaced by f̄ := min_{1≤s≤τ_n} min{T_s, x_s}/κ ≥ 1/κ > 0. To prove this, we bound the sample sizes associated with λ̂_{n,s} from below as in (24), where (a) uses the fact that in each season s ≤ τ_n, the inventory is positive during at least min_{1≤s≤τ_n} min{T_s, x_s} = f̄κ periods, combined with the near-parity of sample sizes (|N_i − N_j| ≤ 1 for i ≠ j). Now (23) follows from (24) as in the proof of Proposition 1, with f replaced by f̄. The remainder of the proof mimics that of Theorem 1, Step 2, with f replaced by f̄.
We now define the strategy σ′ and state a performance guarantee for it in Corollary 1 below; the proof follows easily from that of Theorem 2.

Numerical results
Strategies σ and σ″ will be compared with six alternatives, all recent strategies with proven performance guarantees in particular settings: (1) two fluid-based strategies, σ_F and σ_F′, inspired by Besbes and Zeevi (2012); (2) two fluid-based, upper-confidence-bound strategies, σ_U and σ_U′, adapted from Babaioff et al. (2015); (3) a Thompson-sampling strategy, σ_T, adapted from Ferreira et al. (2018); and (4) a parametric strategy, σ_P, from den Boer and Zwart (2015), adapted for a finite price set. The next section elaborates these alternatives.

Alternative strategies
Fluid-based strategies σ_F and σ_F′ These strategies are inspired by Besbes and Zeevi (2012, Algorithm 1, Section 3.1). With λ momentarily assumed known, consider the linear program max Σ_{i=1}^κ λ_i p_i t_i subject to Σ_{i=1}^κ λ_i t_i ≤ x, Σ_{i=1}^κ t_i ≤ T, and t_i ≥ 0 for i = 1, …, κ, and define a fluid-optimal policy π_F = π_F(λ) as follows. Let t := (t_1, …, t_κ) = t(λ) be an extreme-point optimal solution of the linear program. Let m be the number of positive elements of t, and note that m is either one or two (the program has two structural constraints). If m = 1, then apply, until stock-out or the season's end, the price corresponding to the unique positive component of t. If m = 2, let i_1 and i_2 be the indices of the two positive elements of t, ordered by increasing revenue rate, λ_{i_1} p_{i_1} ≤ λ_{i_2} p_{i_2}, and apply, until stock-out or the season's end, price p_{i_1} for the first [t_{i_1}] periods and price p_{i_2} thereafter. The ordering "first p_{i_1}, then p_{i_2}" is chosen because it performed somewhat better than the reverse one (first p_{i_2}, then p_{i_1}) in our small-inventory cases (x = 10), while the two were indistinguishable when x = 100. Besbes and Zeevi (2012) tacitly establish that the ordering is immaterial (in their model) as inventory grows large: it appears neither in their Algorithm 1 nor in the associated regret bound (Besbes and Zeevi 2012, Theorem 1). We now define two explore-then-exploit strategies for a horizon of n seasons. Step 1. During the first τ seasons, price to learn, maintaining near-parity of sample sizes (as under σ). Let λ̂ be the estimate of λ based on the history up to the end of season τ.
Step 2, Strategy σ_F. For each season s = τ + 1, …, n, apply π_F(λ̂). Step 2, Strategy σ_F′. For each season s = τ + 1, …, n: (a) let λ̂_s be the estimate of λ based on the history up to season s; (b) apply the counterpart of π_F in which λ is replaced by λ̂_s.
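The fluid allocation t(λ) can be computed without an LP library: because the program has only two structural constraints, an extreme-point optimum has at most two positive components, so small instances can be solved by direct enumeration. A Python sketch (the LP form is our reconstruction of the program behind π_F, and the function name is ours):

```python
import numpy as np
from itertools import combinations

def fluid_policy(prices, lam, x, T):
    """Extreme-point optimum of the fluid LP
       max sum_i lam_i p_i t_i  s.t.  sum_i lam_i t_i <= x,  sum_i t_i <= T,
    with t >= 0, found by enumerating extreme points (at most two positive
    components). Returns the per-price period allocations t."""
    p, l = np.asarray(prices, float), np.asarray(lam, float)
    k = len(p)
    best_t, best_v = np.zeros(k), 0.0
    # single-price extreme points: one constraint tight
    for i in range(k):
        ti = min(T, x / l[i]) if l[i] > 0 else float(T)
        v = l[i] * p[i] * ti
        if v > best_v:
            best_v, best_t = v, np.zeros(k)
            best_t[i] = ti
    # two-price extreme points: both constraints tight
    for i, j in combinations(range(k), 2):
        A = np.array([[l[i], l[j]], [1.0, 1.0]])
        if abs(np.linalg.det(A)) < 1e-12:
            continue
        ti, tj = np.linalg.solve(A, [x, T])
        if ti >= 0 and tj >= 0:
            v = l[i] * p[i] * ti + l[j] * p[j] * tj
            if v > best_v:
                best_v, best_t = v, np.zeros(k)
                best_t[i], best_t[j] = ti, tj
    return best_t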
Note that σ_F fixes a single policy throughout the exploitation phase, whereas σ_F′ re-estimates and updates the policy in each season. In choosing τ, we considered the following variants: τ = c_τ (n² log n)^{1/3} n^{i/10} for i ∈ {−2, −1, 0, 1, 2}, with c_τ as in (18). For n = 10^6 (the largest value we considered), the regret was similar for i ∈ {−2, −1, 0}, and larger otherwise. We therefore choose i = 0, i.e., set τ = τ_n as in (18), and we claim that the performance and inconsistency reported below are not artefacts of having chosen τ poorly as a function of n.

Fluid-based, upper-confidence-bound strategies σ_U and σ_U′ Babaioff et al. (2015) approximate the total revenue over a season as r(p) = r(p; x, T) = r(p; x, T, λ(·)) := p · min(x, Tλ(p)), where the price p lies in a continuous domain and λ(p) is the associated purchase probability. They pursue a fixed-price policy that prices at the maximizer of r(·). Their method is asymptotically optimal in their setting, and uses an upper confidence bound (UCB) for each r(p), for p in an appropriate finite set that is asymptotically dense in the continuous domain. Translating this approach to the finite-price setting, we seek to price at i* := min arg max_{i∈A} r(p_i) through their UCB method, detailed below. Babaioff et al. (2015, Section 4) also mention a "tempting" dynamic alternative, in which, at each time u, the total remaining revenue in the season is approximated by r_u(p_i) = r(p_i; C_u, t_u) := p_i · min(C_u, t_u λ_i) (where C_u is the remaining inventory and t_u the remaining time), and one aims to price at the maximizer of r_u(·) via the same UCB method. To implement both variants, we use the upper confidence bounds in Babaioff et al.
(2015), as follows: let N_i(u) denote the number of periods before u in which the chosen price was p_i; let S_i(u) denote the total sales obtained during these periods; and define the index I_{u,i} by (27). Here, I_{u,i} is an upper confidence bound on r(p_i), with the radius ρ_{u,i} motivated in Babaioff et al. (2015). In addition, define an index Ĩ_{u,i} as in (27), with x and T replaced by C_u and t_u, respectively; this index is an upper confidence bound on r_u(p_i). We now define two strategies for n seasons of length T. Strategy σ_U For all u = 1, …, nT, set A_u = min arg max_{1≤i≤κ} I_{u,i} if C_u > 0, and set A_u = 0 if C_u = 0.
Strategy σ_U′ For all u = 1, …, nT, set A_u = min arg max_{1≤i≤κ} Ĩ_{u,i} if C_u > 0, and set A_u = 0 if C_u = 0.
Thus, both strategies charge a price that maximizes the UCB on the corresponding expected revenue; in case of a tie, the smallest maximizing price is selected.
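The indices can be computed as below. The confidence radius shown is a generic Hoeffding-type stand-in inserted for illustration, not necessarily the exact radius ρ_{u,i} of Babaioff et al. (2015); the function name is also ours.

```python
import numpy as np

def ucb_index(p, n_pulls, sales, x, T, t_total):
    """Index I_{u,i} = p_i * min(x, T * (lam_hat_i + rho_i)): an upper
    confidence bound on the revenue approximation r(p_i) = p_i*min(x, T*lam_i).
    The radius rho below is a generic Hoeffding-type choice, used as an
    illustrative stand-in for the radius motivated in Babaioff et al. (2015)."""
    lam_hat = sales / np.maximum(n_pulls, 1)          # empirical purchase prob.
    rho = np.sqrt(2 * np.log(max(t_total, 2)) / np.maximum(n_pulls, 1))
    return p * np.minimum(x, T * (lam_hat + rho))
```

The dynamic variant Ĩ_{u,i} is obtained by the same call with x and T replaced by the remaining inventory C_u and remaining time t_u.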
Thompson-sampling strategy σ_T This strategy is an adaptation of Ferreira et al. (2018, Algorithm 2), which is based on Bayesian estimation of λ and, according to the authors, "addresses the challenge of balancing the exploration-exploitation tradeoff under the presence of inventory constraints". Following them, the prior distribution on λ consists of independent Uniform(0,1) marginals; and since λ is constant over seasons, it is natural to apply their (Bayesian) estimator to the data from all past time periods.
Strategy σ_T Repeat the following steps for all u = 1, …, nT. Sample demand. For each i = 1, …, κ, let λ̃_i be an independent sample from the Beta(S_i(u) + 1, N_i(u) − S_i(u) + 1) distribution, where N_i(u) denotes the number of periods before u in which the chosen price was p_i, and S_i(u) denotes the total sales obtained during these periods.
Price. Let t := (t_1, …, t_κ) be an optimal solution to the linear program max Σ_{i=1}^κ p_i λ̃_i t_i subject to Σ_{i=1}^κ λ̃_i t_i ≤ C_u/t_u, Σ_{i=1}^κ t_i ≤ 1, and t_i ≥ 0 for i = 1, …, κ, (28) where C_u and t_u denote the season's remaining inventory and remaining time in period u, respectively. Set A_u randomly to one of 1, 2, …, κ, 0 with respective probabilities t_1, t_2, …, t_κ, 1 − Σ_{i=1}^κ t_i. Update history. Observe the demand D_u and update the history.
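One period of σ_T can be sketched as follows, with LP (28) solved by two-constraint extreme-point enumeration (the enumeration shortcut, the function name, and the return convention — index κ meaning "no offer", A_u = 0 — are ours):

```python
import numpy as np
from itertools import combinations

def thompson_step(prices, sales, pulls, C_u, t_u, rng):
    """One pricing period of a Thompson-sampling strategy in the spirit of
    Ferreira et al.'s Algorithm 2 as described in the text: sample purchase
    probabilities from Beta posteriors, solve LP (28) over mixing
    probabilities t, then randomize. Returns an index in 0..k-1 (a price)
    or k (offer nothing, i.e., A_u = 0). Illustrative sketch."""
    k = len(prices)
    lam = rng.beta(sales + 1, pulls - sales + 1)   # posterior samples
    cap = C_u / t_u                                # inventory rate constraint
    best_t, best_v = np.zeros(k), 0.0
    for i in range(k):                             # one-price extreme points
        ti = min(1.0, cap / lam[i]) if lam[i] > 0 else 1.0
        v = prices[i] * lam[i] * ti
        if v > best_v:
            best_v, best_t = v, np.zeros(k)
            best_t[i] = ti
    for i, j in combinations(range(k), 2):         # two-price extreme points
        A = np.array([[lam[i], lam[j]], [1.0, 1.0]])
        if abs(np.linalg.det(A)) < 1e-12:
            continue
        ti, tj = np.linalg.solve(A, [cap, 1.0])
        if ti >= 0 and tj >= 0:
            v = prices[i] * lam[i] * ti + prices[j] * lam[j] * tj
            if v > best_v:
                best_v, best_t = v, np.zeros(k)
                best_t[i], best_t[j] = ti, tj
    # offer price i with prob. t_i, nothing with prob. 1 - sum(t)
    probs = np.append(best_t, max(0.0, 1.0 - best_t.sum()))
    return int(rng.choice(k + 1, p=probs / probs.sum()))
```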
Parametric strategy σ_P This strategy is our adaptation of den Boer and Zwart (2015) to the finite-price setting. Its basis is the assumption that any price p entails the purchase probability λ(p) = η(β_1 + β_2 p), where η(z) := exp(z)/(1 + exp(z)) and β := (β_1, β_2) are unknown parameters. By the conditional independence in our model, the demands D_u are, conditionally on the chosen prices, independent Bernoulli(η(β_1 + β_2 p_{A_u})) random variables (29). This is a generalized linear model (GLM) with canonical link function η(·); thus, maximum-likelihood estimates of β are computable by standard methods. Strategy σ_P Repeat the following steps for all u = 1, …, nT. Estimate the purchase probabilities. Compute β̂_{u−1,j} as a maximum-likelihood estimate of β_j (j = 1, 2) under the GLM (29). Price. Let π_u be the optimal action, defined as in (4), with the probabilities there being the estimates λ̂_{u,a} computed above. Set A_u = π_u, unless π_u is such that no price dispersion occurs during the current season (i.e., t_u = 1, C_u = 1, and setting A_u = π_u would make all the actions A_u of the completed season equal); in this case only, set A_u by altering π_u to the nearest action towards the mid-point of the price domain. Update history. Observe the demand D_u and update the history.
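The GLM fit in the estimation step is a standard logistic-regression MLE. A self-contained Newton (Fisher-scoring) sketch, with our own function name; production code would add a convergence check and a guard against separation:

```python
import numpy as np

def fit_logit(prices_seen, demands, iters=25):
    """Maximum-likelihood estimate of (beta1, beta2) in the parametric model
    lambda(p) = exp(b1 + b2*p) / (1 + exp(b1 + b2*p)), fitted to Bernoulli
    demand observations by Newton's method. Standard GLM fit; sketch only."""
    X = np.column_stack([np.ones(len(prices_seen)), np.asarray(prices_seen, float)])
    y = np.asarray(demands, float)
    beta = np.zeros(2)
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))   # predicted purchase prob.
        W = mu * (1.0 - mu)
        grad = X.T @ (y - mu)                    # score vector
        hess = X.T @ (X * W[:, None])            # Fisher information
        beta = beta + np.linalg.solve(hess, grad)
    return beta
```

With observations at two distinct prices, the two-parameter model is saturated, so the fitted probabilities match the empirical purchase frequencies at each price.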

Consistency
A strategy is said to be consistent if its relative regret converges to zero as n → ∞ (equivalently, its regret is o(n)) uniformly over λ ∈ Λ. Strategies σ and σ″ are consistent, by Theorem 1 and Theorem 2, respectively. In contrast, all six alternative strategies may fail to be consistent. Inconsistency of σ_F and σ_F′ Let V_F = V_F(λ) denote the expected per-season revenue of policy π_F. Loosely speaking, these strategies incur revenue losses of two types relative to pricing optimally (i.e., using π* in all seasons): (i) the loss V − V_F = V(λ) − V_F(λ); and (ii) the loss due to not knowing π_F exactly. More formally, suppose that (i) for some λ_0 ∈ Λ we have V(λ_0) − V_F(λ_0) > 0; and (ii) the value of the policy applied in any exploitation season (the counterpart of π_F) does not exceed V_F(λ_0), and τ_n/n → 0. Then the exploitation phase incurs a loss of at least (n − τ_n)(V(λ_0) − V_F(λ_0)). In this setting, we have lim inf_{n→∞} R_n(·; λ_0)/(nV) ≥ 1 − V_F(λ_0)/V(λ_0) > 0 for these two strategies; thus, the right side is a fundamental lower bound on the relative regret, and we call it the fluid gap.
Inconsistency of σ_U and σ_U′ Let π_U denote the single-fixed-price policy that prices at p_{i*}, where i* is defined in (26), and let V_U = V_U(λ) denote its expected revenue, defined in (2). Assuming that there exists λ_0 ∈ Λ such that V_U(λ_0) < V(λ_0), and that the (expected) per-season revenue under σ_U is at most V_U(λ_0) for all sufficiently late seasons (a reasonable assumption), we obtain lack of consistency: lim inf_{n→∞} R_n(·; λ_0)/(nV) ≥ 1 − V_U(λ_0)/V(λ_0) > 0. Thus, the right side is a fundamental lower bound on the relative regret, and we call it the fixed-price gap. Strategy σ_U′ does not lend itself to a similar argument (the functions r_u(·) involve the stochastic process {(C_u, t_u) : u ≥ 1}, which is difficult to analyze). Since it is indifferent to the MDP, we expect that, for some λ_0 ∈ Λ, its asymptotic per-season revenue is smaller than V(λ_0), i.e., that it is inconsistent; our numerical results below confirm this.
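The value V_U of a single-fixed-price policy, needed for the fixed-price gap 1 − V_U/V, follows from the same backward induction as the optimal value, with the maximization over prices removed. A sketch consistent with the model of Sect. 2 (function name ours):

```python
def fixed_price_value(T, x, p, lam):
    """Expected revenue of the policy that always offers price p (purchase
    probability lam) over a season of T periods with x units: backward
    induction without the max over prices. Comparing with the optimal MDP
    value gives the fixed-price gap 1 - V_U/V. Sketch."""
    V = [[0.0] * (x + 1) for _ in range(T + 1)]  # V[t][c]
    for t in range(1, T + 1):
        for c in range(1, x + 1):
            V[t][c] = lam * (p + V[t - 1][c - 1]) + (1 - lam) * V[t - 1][c]
    return V[T][x]
```

For example, with T = 2, x = 1, p = 0.5, and lam = 0.8, the value is 0.8·0.5 + 0.2·(0.8·0.5) = 0.48.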
Inconsistency of σ_T The guarantee in Ferreira et al. (2018, Theorem 2) is inconsequential in our setting for two reasons: (i) their upper bound is on Bayesian regret, while ours is on worst-case regret (over all possible values of λ ∈ Λ); and (ii) by the boundedness of season lengths, the Bayesian regret need not vanish as the number of seasons n → ∞. Since σ_T is indifferent to the MDP, we expect that, for some λ_0 ∈ Λ, its asymptotic per-season revenue is smaller than V(λ_0), i.e., that it is inconsistent; our numerical results below confirm this.
Inconsistency of σ P This strategy runs the risk of model misspecification discussed earlier: if the demand function cannot be appropriately approximated by the assumed parametric model, then, even with an abundance of sales data, the action it prescribes may differ from the optimal one (entailing an asymptotic per-season revenue smaller than the optimum). Our numerical results below confirm this, and include cases where the inconsistency gap is large.

Numerical study
Main part: regret with emphasis on the effect of n We compare the performance of σ and σ″ with that of the six alternatives σ_F, σ_F′, σ_U, σ_U′, σ_T, and σ_P. We consider identical seasons (Sect. 2) and four demand functions: step, linear, logit, and exponential.

[Fig. 1: The demand and revenue vectors, and the generating continuously-supported analogs, for the step, linear, logit, and exponential cases.]

The step function is chosen to have a small number of discontinuities of substantial size; the other three demand functions are continuous. (See den Boer and Keskin (2019) for several practical applications where demand discontinuities may arise.) The number of prices is set to κ = 10, and the price points to p_i := (i − 0.5)/κ for i = 1, 2, …, κ. This results in the demand vectors λ_i = [λ_i(p_1), …, λ_i(p_κ)] for i ∈ {1, 2, 3, 4}. Figure 1 depicts the demand vectors λ_i; the revenue vectors {p_j λ_i(p_j)}_{j=1}^{10}; and the underlying continuously-supported analogs (λ_i(p) and pλ_i(p) for p ∈ [0, 1]) for all i. From here on, the four demand vectors are treated in a unified manner.
Inventory is set at the levels x_1 = 10 and x_2 = 100. We examine the effect of demand strength systematically as follows. For each demand vector λ_i, i ∈ {1, 2, 3, 4}, let p_i^U be the revenue-rate-maximizing price (i.e., λ_i(p_i^U) p_i^U = max_j λ_i(p_j) p_j). We want the mean demand, when price p_i^U is applied throughout the season, to be as close as possible to the inventory x_k times a demand-strength factor c_j; this is achieved by setting the season length to the nearest integer to c_j x_k / λ_i(p_i^U) (30). We set c_j = (3/4) · 2^{j−1} for j = 1, 2, 3 to create scenarios of low, medium, and high demand, respectively. The planning horizon (number of seasons) n varies along powers of 10 from small (10) to large (10^6).
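Under our reading of (30), the season length is the integer nearest to c_j x_k / λ_i(p_i^U). A small sketch (function name ours):

```python
import numpy as np

def season_length(lam, prices, x, c):
    """Season length per (our reading of) Eq. (30): pick the revenue-rate-
    maximizing price, then choose T so that mean demand at that price,
    T * lam(p^U), approximates c * x. Illustrative sketch."""
    rates = np.asarray(lam) * np.asarray(prices)
    i_star = int(np.argmax(rates))              # revenue-rate maximizer
    return max(1, round(c * x / lam[i_star]))
```

For instance, with λ = (0.8, 0.3), prices (0.25, 0.75), x = 10, and c = 0.75, the revenue-rate maximizer is the second price and T = 25.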
Part 2 In this part we keep the inventory and demand vectors as previously, and we modify the range of demand strength, making it far wider than in the main experiment: season lengths are set as usual by (30), but now across c_j = (3/4) · 2^j for j = −1, 0, 1, …, 6; thus, for j = −1 the mean demand is very weak (37.5% of inventory), while for j = 6 the mean demand is extremely strong (48 times the inventory). For each resulting case, we report: (a) the fluid gap and the fixed-price gap (defined in Sect. 6.2); and (b) the estimated relative regret of selected strategies for n = 10^2 and n = 10^4, with the latter serving as a large-n example (estimation details are discussed in Sect. 6.4 below).
Computing cost We study the strategies' computing cost, defined as the time spent computing the prices (A_u for all relevant u), as measured within our Matlab code via the built-in commands tic and toc. This is done in an experiment whose independent factors are the inventory, taking values x_1 = 50, x_2 = 100, and x_3 = 200; the demand vector λ_i, i ∈ {1, 2, 3, 4}; and the demand strength, taking values c_j = (3/4) · 2^j for j ∈ {0, 1, 2, 3}; setting season lengths as in (30), the mean demand is 75%, 150%, 300%, and 600% of the inventory, for each k and i. Regarding the dependence of the cost on n, our numerical experience suggests a clear distinction between the 'update' strategies (σ″, σ_F′, σ_U, σ_U′, σ_T, and σ_P) and the others (σ and σ_F): for the former, the cost is (nearly) linear in n, i.e., very close to Cn, where C is the strategy-specific expected cost per season; for the latter, the cost does not change substantially with n. The issues of main interest are therefore (i) the dependence of the update-type strategies' C on the inventory x and season length T; and (ii) the cross-strategy comparison of these C's. We answer these questions based on n = 100 seasons; based on unreported results with n = 10^3, we expect these answers to be essentially unchanged for all n ≥ 100.
Main results: relative regret and consistency For each strategy we have 24 cases, generated by inventory x ∈ {10, 100}; demand vector λ_i, i = 1, 2, 3, 4; and demand level c_j, j = 1, 2, 3 (low, medium, and high). For each strategy σ and each n = 10^k with k = 1, …, 6 (except for σ_P with k = 5, 6), we compute an estimate of R_n = R_n(σ) that is as accurate as possible subject to our computer-time constraints (see the paragraph 'Estimation and accuracy' below). These estimates are denoted R̂_n = R̂_n(σ) and are reported in Figs. 2 and 3, for x = 10 and x = 100, respectively.
Estimation and accuracy For each strategy σ and each case, a case-specific number n_rep = n_rep(σ, x, i, j, n) of independent simulation replications (of n seasons each) is run; each replication yields one sample value of the revenue loss relative to the optimum nV_{x,T}; this loss, divided by the optimum, is a sample value of R_n = R_n(σ). The estimate R̂_n is the average of these n_rep samples; its relative error (inverse accuracy) is SE(R̂_n)/R̂_n, where SE(R̂_n) is the sample standard deviation divided by √n_rep. We do not simulate σ_P for large n (n ∈ {10^5, 10^6}), because the simulation cost is exceptionally high, except in the two cases where x = 10, the demand vector is λ_1, and demand strength is medium or high (which demonstrate the large inconsistency gap); points missing in the figures are due to this choice. Except for σ_T and σ_P, the accuracy is good for all n (relative error less than 5%; for σ and σ_F, less than 2%). The accuracy decreases somewhat for σ_T and σ_P as n increases, but only when R̂_n(·) is very small, in which case our comparisons are unaffected, even if we replace this estimate by the normal-based 95%-confidence lower or upper bound. This limitation is unavoidable and due to the excessive simulation cost (for example, for n = 10^6, a single replication of σ_T for x = 100, i = 1, and j = 1 requires 3.5 × 10^5 seconds; and σ_P would require much more). Figures 5 and 6 show, for x = 10 and x = 100 respectively, the two gaps and the relative regrets for n = 10^2 and n = 10^4. Since these gaps explain the large-n regret of π_F and π_U, we examine them more carefully. We define the fluid (LP) slack and the fixed-price (FP) slack as the fraction of inventory that is unused under the optimal solution of the fluid model underlying π_F and π_U, respectively.
Figure 7 gives a detailed view of these optimal solutions for each of the 8 cases (two inventories x, four demand vectors λ), showing, in each case, the optimal price (or prices, for π_F), the slack, and the gap. The insights obtained are: (a) these gaps are, respectively, tight lower bounds on the large-n relative regret of σ_F and σ_U; and (b) at the demand extremes stated above, σ is not competitive, which is explained by the fact that the optimal policy is then nearly a fixed-price one, and consequently the gaps nearly vanish. Since these results are secondary, they are presented and discussed in the Appendix.

Part-2 results
Results on computing cost We have 48 = 3 × 4 × 4 design points. With C_{k,i,j} denoting the cost of strategy σ at the point (k, i, j) (i.e., with inventory x_k, demand vector λ_i, and demand strength c_j), we report the summary statistics over the design: C_min = min_{k,i,j} C_{k,i,j}, C_avg = (1/48) Σ_{k=1}^3 Σ_{i=1}^4 Σ_{j=1}^4 C_{k,i,j}, and C_max = max_{k,i,j} C_{k,i,j}. Moreover, we develop a cost model C(x, T) ≈ 10^{β_0} x^{β_x} T^{β_T}, and estimate it by allowing random error over the design points: log_10 C_{k,i,j} = β_0 + β_x log_10 x_k + β_T log_10 T_{k,i,j} + ε_{k,i,j}, (31) where the ε_{k,i,j} are random errors. We compute least-squares estimates β̂_0, β̂_x, and β̂_T; these imply the fitted model Ĉ(x, T) = 10^{β̂_0} x^{β̂_x} T^{β̂_T}. For each of the six update-type strategies, the data C_{k,i,j} and the estimated model are visualized in Fig. 4; the model and the summary statistics are reported in Table 1.
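The least-squares fit of (31) is an ordinary linear regression in log_10 scale. A sketch (function name ours):

```python
import numpy as np

def fit_cost_model(xs, Ts, Cs):
    """Least-squares fit of the cost model (31):
       log10 C = b0 + b_x * log10 x + b_T * log10 T + error.
    Returns the estimates (b0, b_x, b_T). Sketch of a standard regression."""
    A = np.column_stack([np.ones(len(Cs)), np.log10(xs), np.log10(Ts)])
    coef, *_ = np.linalg.lstsq(A, np.log10(Cs), rcond=None)
    return coef
```

On noise-free data generated from the model itself, the three coefficients are recovered exactly (up to floating-point error).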

Discussion
This section discusses results and insights from the numerical study. Our main results are Figs. 2 and 3, showing, for each of the eight strategies, the estimated relative regret R̂_n(σ) as a function of n, with both axes in logarithmic scale. Regret consistency and rate of convergence For σ and σ″, the relative regret is nearly a straight line with slope −1/3 in all 24 cases, in line with Theorems 1 and 2. For each alternative except σ_P, the relative regret flattens (the slope approaches zero) as n increases in all cases except the low-demand ones, i.e., in all sub-figures except those in the left column. For σ_P, a flattening of the relative regret is evident only for the demand vector λ_1. Whenever a flattening is evident, the flattened value, i.e., R̂_n(σ) for the largest available n, is a reasonable proxy for the inconsistency gap, defined as lim_{n→∞} R_n(σ) (and also discussed in Sect. 6.2). The size of the gap varies with the case and strategy, and is discussed further below. Crucially, σ and σ″ enjoy a consistency and convergence rate that hold uniformly across all cases (inventory, demand vector, demand strength), while no other strategy achieves this uniform consistency, even if some perform very well in some cases.
Inconsistent strategies: the size of the gap and its consequences Given an inconsistent strategy σ, i.e., one with gap lim_{n→∞} R_n(σ) > 0, each of σ and σ″ outperforms σ for all n ≥ n_0, for some n_0 = n_0(σ). Such n_0 ≤ 10^6 are often exhibited in these figures; for example, for x = 10, λ_1, and high demand, choosing σ against σ_T, we see that n_0 is about 100; against any other strategy, n_0 is at most 10. In a number of cases the gap is apparently small, so we do not obtain n_0 ≤ 10^6. We see two prominent groups with apparently small gap: (1) all strategies in the low-demand cases (sub-figures in the left column of Figs. 2 and 3); and (2) strategy σ_P under vectors λ_2 to λ_4. These groups are discussed further, respectively, in the two paragraphs that follow. Strategies σ_T and σ_P appear to be strong contenders: for vectors other than λ_1, and with medium or high demand, their gap is small and their relative regret is smaller than that of σ and σ″ for many n (n smaller than the n_0 above). That said, taking σ_T as the contender, we fail to demonstrate n_0 ≤ 10^6 in only one of the 16 non-low-demand cases (the case x = 100, λ_3, and high demand).

[Fig. 4: In each sub-figure, the height is log_10(C), where C is the per-season cost; the two axes in the horizontal plane are log_10(x) and log_10(T). Each of the six sub-figures shows the cost points log_10(C_{k,i,j}) for all (or most) k, i, j; the plane shown is the estimated model. Sub-figures correspond to strategies as follows: σ″ and σ_F′ (first row, from left to right); σ_U and σ_U′ (second row); σ_T and σ_P (third row).]

Effect of extreme-demand scenarios
The low-demand cases stand out in that a flattening of the relative regret (of the alternatives) is virtually absent; thus, the gaps appear notably smaller than at the other demand levels. Such an effect might occur at both extremes, i.e., extreme-low and extreme-high demand (evidence covering both extremes is provided by Figs. 5 and 6 in the Appendix). To explain this effect, note that in these extremes the solution to the MDP is nearly a fixed-price policy: under extreme-low demand, it is the revenue-rate maximizer; under extreme-high demand, it is the maximal price p_κ. In these extremes, since a fixed price is nearly optimal, a focus on the MDP may be unwarranted, and our approach may under-perform.
Strategy σ_P Figures 2 and 3 suggest that the gaps of σ_P under λ_2 to λ_4 tend to be small. However, these gaps are hard to measure accurately, not only because they are apparently small, but also because it is impractical to increase n further (as discussed in Sect. 6.4, paragraph Estimation and accuracy). Remarkably, σ_P is the worst performer (of all strategies) under demand vector λ_1 and the best performer otherwise (demand vectors λ_2 to λ_4). This contrast is explained by the parametric nature of σ_P: the parametric demand model on which it is based fails to contain a good approximation to λ_1, while it apparently succeeds in the other cases.
The following two paragraphs discuss the effect of inventory; this discussion is empirical and its importance secondary.
Effect of inventory on small-n regrets Larger inventory usually reduces the finite-n relative regret. Indicatively, we consider σ and σ_T for n = 100. For σ with x = 10, R̂_n(σ) ranges across the 12 cases from about 6.4% to 9.5%; for x = 100 the range is 3.3% to 4.4%. For σ_T, the range is 1.5% to 17.4% for x = 10, and 0.23% to 2.4% for x = 100.
Effect of inventory on worst-case gaps of σ_T and σ_P We consider the effect of inventory on the worst (largest across the 12 cases) gaps of σ_T and σ_P, as approximated by R̂_n(σ) for the largest available n in each case. For σ_T, max_{i,j} R̂_{10^6}(σ_T; λ_i, x, j) occurs, for both values of x, at i = 1 and j = 3 (step demand function and high demand); it is about 4.1% for x = 10 but only 0.67% for x = 100. For σ_P, the maximum also occurs in the case i = 1, j = 3, but the inventory has almost no effect (max_{i,j} R̂_n(σ_P; x, i, j) is 34.9% for x = 10 and 35.1% for x = 100).
Computing cost Figure 4 and Table 1 show that the cost of each strategy is explained well, over the design range, by the three-parameter model (31), with the sole exception of σ_F′, whose main cost is that of solving a linear program (LP), which is insensitive to x and T. Strategy σ″ costs well above σ_U and σ_U′, but well below σ_T and σ_P. The cost of σ_P grows faster in T than that of any other strategy, as seen from its larger β_T. To summarize the elements of these costs, we define an active (time) period as one in which neither time nor inventory has run out (in the current season). Then, σ″ solves one MDP in each season; σ_P solves one MDP, and in addition estimates a generalized linear model, in each active period; σ_F′ solves one LP in each season; σ_T solves one LP in each active period; and σ_U and σ_U′ require only a few elementary numerical operations in each active period.
Summary and insights Our main conclusion is the uniform consistency and convergence rate of our approach (σ and σ″) across all possible demand (purchase-probability) vectors, a feature not enjoyed by any other strategy; this is evidenced by the totality of cases in each of Figs. 2 and 3. Consistency implies that our approach outperforms any inconsistent strategy for all planning horizons n ≥ n_0, for a suitable n_0. These n_0, often visible in the figures, depend on the strategy's gap, i.e., the limit of its n-season relative regret as n → ∞; the smaller the gap, the larger n_0 tends to be. It is noteworthy that relatively small gaps occur in specific cases: (a) for all inconsistent strategies, under extreme-low or extreme-high demand, perhaps because the optimal policy is then almost a fixed-price one; and (b) for strategy σ_P, but only when its parametric model contains a good approximation to the demand vector. Regarding computing cost, our approach sits mid-range among the alternatives considered.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix
We provide the results of 'Part 2' of our experiments and a discussion thereof. Figures 5 and 6 help quantify the relative regret of σ_F and σ_U via the respective gaps (as predicted in Sect. 6.1). In the large-n case, n = 10^4 (right column), we see that for all T the estimated relative regret of σ_U is nearly indistinguishable from the fixed-price gap 1 − V_U/V, and that of σ_F is lower-bounded (fairly tightly) by the fluid gap 1 − V_F/V.
Since these gaps are essential to the large-n regret of these strategies, we examine them more carefully. In Fig. 7, the season length T (and thus the strength of demand) varies over a dense set of points, and the following are reported as functions of T, for each of π_U and π_F: the optimal prices (i* for the former, i_1 and i_2 for the latter); the slack; and the gap.
Effect of T on the gaps For π_U, we see critical values of T at which i* jumps upward, entailing a downward jump in demand rate and inventory consumption, and hence a positive slack; the local maxima of the fixed-price gap are explained by corresponding local maxima of the slack. For π_F, there also exist critical values of T at which the optimal solution changes; however, the presence of two prices in the solution means that the slack is usually zero.
Effects of inventory x and demand vector λ on the worst-in-T gaps For each of the 8 cases (sub-figures) in Fig. 7, we define the worst-in-T gaps ρ(π_F; λ, x) := max_T (1 − V_F(λ, x, T)/V(λ, x, T)) and ρ(π_U; λ, x) := max_T (1 − V_U(λ, x, T)/V(λ, x, T)). For each λ (row), ρ(π_F; λ, x) decreases with x. For each x (column), the worst (largest) ρ(π_U; λ, x) (and the largest slacks) occur at λ_1; the larger sparsity of this vector, where the sparsity of λ is defined as max_{1<i≤κ} |λ_i − λ_{i−1}|, may be the reason behind the larger slacks and gaps.

[Fig. 7: Properties of policies π_F and π_U as functions of season length T (horizontal axis, logarithmic scale): optimal price indices (divided by (4κ)^{−1} for convenient scaling); the slack; and the gap. The ith row corresponds to demand vector λ_i; the left and right columns correspond to inventory x = 10 and x = 100, respectively. Symbols '+' and '•' mark the slack and gap of π_F and π_U, respectively, for the T values in Figs. 5 and 6.]