1 Introduction

The last ten years have seen the rapid development of the concept of the smart grid, and significant work in the area of demand side load management [10, 17]. With the increasingly large range of (renewable) energy sources and tariffs, optimization in energy management is increasingly important. Koutsopoulos and Tassiulas [15] studied a type of optimization problem that is typical in this setting: an energy grid operator receives consumer requests with different power requirements, durations, and deadlines. The objective of the operator is to devise a scheduling policy that minimizes the grid operational cost, seen as a convex function of the total load, over a given time horizon. The authors showed that if power demands can be served preemptively then the allocation problem can be solved effectively, and if that is not the case then the problem is NP-hard. Arikiez et al. [3] studied a Micro-Grid scenario that generalizes Koutsopoulos and Tassiulas's model. A set of houses and a set of (renewable) energy generators are given, fully connected to each other and connected to a national electricity generator (NEG). Each house has a set of appliances which must be run a certain number of times in specified time periods (must-use constraints). Additionally some appliances may be used to control the temperature in (part of) the house (comfort constraints). The time horizon is finite and subdivided into constant-length time intervals. The collection of such intervals is identified with an initial segment \(\mathcal{T} = \{1, 2, \ldots , \tau \}\) of the set of positive natural numbers (“0” often refers to a generic moment in time before the process of interest starts). The system state is only allowed to change, instantaneously, between successive time intervals. Hence we talk equivalently of “time (steps)” or “time intervals”. Some of the system settings are described by finite sequences defined over \(\mathcal T\).
For instance each generator r has an available energy function: \(P_r(t)\) indicates how much electricity this generator can deliver during the t-th time (interval) and we assume that this stays constant during such interval. Also, a cost function \(\gamma _{r,h}(t)\) is defined for each house-generator pair (h, r) which indicates the unit price at which house h can buy electricity from generator r at time t. The NEG is represented by a cost function \(\lambda _h(t)\) which indicates the unit price at which any house can buy electricity from the NEG at time t. To avoid trivialities we assume that \(\gamma _{r,h}(t) < \lambda _h(t)\) for all r, h, and t.

The main purpose of this paper is to study the complexity of classes of optimization problems which can be defined within Arikiez et al.'s framework. For this reason, in the rest of the paper, we restrict attention to systems formed by a single house, connected to the NEG and to a single generator (hence we drop indices h and r from our notations). In this setting P1 (equivalent to the non-preemptive case studied by Koutsopoulos and Tassiulas) is the problem of finding an appliance schedule compatible with a given set of must-use constraints which minimizes the energy cost for the house, assuming no temperature controlling appliance is present. After describing a few polynomial time cases, we show that P1 is actually NP-hard in the strong sense even if a certain amount of renewable energy is available free of charge. The main contribution of this paper, though, is the first theoretical analysis of a second, much richer, variant which we call P2. Here there is no limit on how many times each appliance is run, but the house has internal temperature constraints and appliances are air-conditioning (AC) units used to keep the internal temperature within such constraints. In the forthcoming sections we investigate the effect of energy cost variability, as well as the type and distribution of the AC units, on the problem complexity. If there are many units, the problem is NP-hard in the variable cost scenario. Therefore the most interesting case is that of an apartment with a single unit. If the device can operate at many different temperature states the problem is NP-hard. However if the AC unit only has a single operating mode then the problem can be solved efficiently. This is simple if the energy cost is fixed; however, if the cost varies, we found that the existence of an efficient algorithm depends crucially on the house thermal inertia.
Our main algorithmic result on the problem at hand involves the study of the properties of a minimization version of Knapsack, which may be of independent interest.

Variants of the allocation problem considered here have been studied before [5, 9, 13, 16]. In fact research in domestic energy management, technologies for the smart grid, and home automation is thriving, with hundreds of papers published every year and dozens of conferences and journals devoted to these topics. However it seems that most of the effort has concentrated on finding feasible allocation heuristics, while relatively little [1] attention has been given to the scalability of such heuristics or the problems' complexity. Some of these problems are related to machine scheduling [4] and bin packing [15]. But the way in which the appliances are used, their arrangement, and different price strategies are specific features of the smart grid setting. Problem P2 is related to the minimum cost resource allocation studied in [7] and the capacitated covering problems of [6]. However our hardness proofs hold in simpler cases than those considered in the cited papers, and nobody seems to have considered exact polynomial time algorithms for non-trivial special cases.

In the next section we focus on P1. We define the problem and discuss its complexity. In Sect. 3 we work on P2. We start by providing all relevant definitions. Then we analyze the problem's complexity, first in the case of many appliances (Sect. 3.1), then looking at single room single appliance systems (Sects. 3.2, 3.3). The main result of this paper is the design of a polynomial time strategy for the optimization problem P2 in the case of a single AC unit, with a single operating state, in a poorly isolated house. In all our complexity results, if \(\varPi _1\) and \(\varPi _2\) are two computational problems then \(\varPi _1 \le \varPi _2\) will stand for the statement “\(\varPi _1\) is polynomial time reducible to \(\varPi _2\)”, in the sense that there is a polynomial time algorithm translating instances of \(\varPi _1\) into instances of \(\varPi _2\) while preserving solvability (the reader is referred to [12] for all basic complexity theoretic definitions and notations).

2 Allocating “must use” appliances

In this section we focus on problem P1. The given house contains n appliances, identified by the integers \(1, \ldots , n\). The model presented in [3] is quite general but, to simplify our presentation, we restrict ourselves mainly to so-called uniphase interruptible appliances: at each time step t each unit i is either “OFF” or “ON”, and if it is “ON” it uses an amount of power equal to \(\alpha _i\). Each appliance's state can be changed freely at any time. Given \(\tau \) consecutive time steps, the goal is to run each appliance exactly once for a single time step, minimizing the total energy cost for the house. The problem admits a natural Linear Program formulation. The total amount of electricity needed at time t is \(\sum _{1\le i\le n}\alpha _i\cdot x_i(t)\) where, for each \(i \in \{1,\ldots , n\}\), \(x_i(t) =1\) (resp. \(x_i(t) =0\)) if appliance i is ON (resp. OFF) at time t. Electricity may come either from the NEG or from the local renewable power generator. Let G(t) (resp. L(t)) denote the amount of power taken from the generator (resp. from the NEG) at time t. Problem P1 is then described as follows (bold typeface symbols denote vectors with \(\tau \) components, so, for instance, \(\mathbf {P} = (P(1), \ldots , P(\tau ))\), etc., and notations like “\(\mathbf {x} \cdot \mathbf {y}\)” have the usual algebraic interpretation):

$$\begin{aligned} \begin{array}{rclr} \min \ & & \mathbf {\lambda }^\mathrm{T} \cdot \mathbf {L} + \mathbf {\gamma }^\mathrm{T} \cdot \mathbf {G} & \text{s.t.}\\ \mathbf {1}^\mathrm{T} \cdot \mathbf {x}_i & = & 1 & \forall i \in \{1, \ldots , n\} \\ \mathbf {G} & \le & \mathbf {P} & \\ \mathbf {L} + \mathbf {G} & = & \sum _{1\le i\le n}\alpha _i\cdot {\mathbf {x}}_i & \\ \mathbf {x}_i & \in & \{0,1\}^{(\tau )} & \forall i \in \{1, \ldots , n\} \end{array} \end{aligned}$$

where the vector \(\mathbf {x}_i\) describes the state of appliance i at each time step, the first constraint forces \(x_i(t)\) to be one at a single point t, the second one forces the required amount of renewable power not to be larger than the total renewable power available, the third constraint is an energy balance one, and the last one restricts the range of the vectors \(\mathbf {x}_i\).
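Since each appliance runs exactly once, a schedule is simply a choice of one time step per appliance, so toy instances of P1 can be explored by direct enumeration. The sketch below is ours (function names hypothetical, not from [3]); it exploits the assumption \(\gamma (t) < \lambda (t)\), under which drawing as much renewable power as possible at every step is optimal:

```python
from itertools import product

def p1_cost(alpha, lam, gamma, P, schedule):
    """Cost of one schedule, where schedule[i] is the single ON step of
    appliance i.  Since gamma(t) < lambda(t), it is optimal to draw as
    much power as possible from the renewable generator at every step."""
    cost = 0.0
    for t in range(len(lam)):
        load = sum(a for a, s in zip(alpha, schedule) if s == t)
        g = min(load, P[t])                 # renewable power used at step t
        cost += gamma[t] * g + lam[t] * (load - g)
    return cost

def solve_p1_bruteforce(alpha, lam, gamma, P):
    """Enumerate all tau^n schedules; returns (best_cost, best_schedule)."""
    tau = len(lam)
    return min((p1_cost(alpha, lam, gamma, P, s), s)
               for s in product(range(tau), repeat=len(alpha)))
```

This runs in time \(O(\tau ^n)\), which is polynomial in \(\tau \) only when n is fixed, matching the observation made below.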

Fig. 1 Multiphase appliance, with three phases each running for four time steps, using power \(\alpha _1\), \(\alpha _2\), and \(\alpha _3\) respectively

Note that if the number of appliances is fixed the set of feasible solutions for instances of P1 can be enumerated in time polynomial in \(\tau \). In fact this is true also if we required each appliance to be used an arbitrary, but fixed, number of times, or if the appliances had any of the more complex energy usage patterns defined in [3] (Fig. 1 gives an example of the energy requirements for a multiphase non-interruptible appliance). Also, if the system has no renewable power generator then, again, the allocation is easy, as we can simply allocate everything at a time for which \(\lambda (t)\) is minimal. Conversely, if the number of appliances is large the problem is NP-hard, even for \(\tau = 2\) [1, 3]. Here we strengthen this result. Arikiez et al. [3] evaluate the performance of an exact algorithm for P1 and observe that it degrades rapidly as n or \(\tau \) become large. However it is not clear whether a so-called pseudo-polynomial [12, Chapter 4] algorithm may exist that allows P1 to be solved in time polynomial in the magnitude of the numbers involved in the problem instance (this has proved to be beneficial in practical contexts [2]). Our next result makes this rather unlikely.

The 3-partition problem is defined as follows (see [12]):

Data:

\(a_1, \ldots , a_{3m}\) positive integers adding up to mB such that each \(a_i\) satisfies \(B/4< a_i < B/2\).

Solution:

A partition of the given set of numbers into m blocks such that the sum of the elements in each block is equal to B.

It is well-known that 3-partition is NP-hard in the strong sense (see [12, p. 99]) and this in turn rules out the possibility of a pseudo-polynomial time algorithm for this problem (unless P=NP). We describe a reduction from 3-partition to the decision version of P1 that preserves strong NP-hardness. This leads to the following result.

Theorem 1

Problem P1 is NP-hard in the strong sense.

Proof

We show that a generic instance of 3-partition can be reduced to the decision version of P1 by a pseudo-polynomial transformation (as defined in [11]). Let \(a_1, \ldots , a_{3 m}\) and \(B > 0\) be an instance of 3-partition. Define an instance of P1 by taking \(\tau = m\), and using 3m appliances with \(\alpha _i = a_i\) for each \(i \in \{1, \ldots , 3m\}\). Assume that there is a single renewable power generator. Let \(\gamma (t) = 0\), and \(P(t) = B\), for all \(t \in \mathcal{T}\). Set \(\lambda (t)\) to be some arbitrary fixed positive value.

The transformation preserves strong NP-hardness since the largest numerical value of the resulting instance is \(B < 4\cdot a_1\). It is easy to see that if there is a partition into m blocks then there is an appliance allocation over \(\tau (=m)\) time steps that uses all the renewable power available and costs nothing. Conversely, if there is no good partition then there must be a time step t in which we need more than P(t) energy (and hence we must pay for it, buying it from the NEG). \(\square \)
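The reduction above is easy to exercise on toy instances. The sketch below (names hypothetical, ours) builds the P1 instance of the proof and checks by enumeration whether a zero-cost schedule, i.e. one that never draws from the NEG, exists:

```python
from itertools import product

def reduce_3partition_to_p1(a, B):
    """Build the P1 instance of Theorem 1: tau = m steps, alpha_i = a_i,
    free renewable power P(t) = B, and a positive NEG price."""
    m = len(a) // 3
    return dict(alpha=list(a), tau=m, gamma=[0] * m, P=[B] * m, lam=[1] * m)

def has_zero_cost_schedule(inst):
    """True iff some schedule never draws from the NEG, i.e. the load at
    every step stays within the renewable bound P(t)."""
    tau, alpha, P = inst["tau"], inst["alpha"], inst["P"]
    return any(all(sum(a for a, s in zip(alpha, sched) if s == t) <= P[t]
                   for t in range(tau))
               for sched in product(range(tau), repeat=len(alpha)))
```

Since the total load equals \(mB\) and each step admits at most B, a zero-cost schedule forces every step's load to be exactly B, mirroring a partition into m blocks of sum B.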

3 Controlled temperature environments

So far we have looked at energy optimisation in a rather isolated environment: a house, connected to several energy sources, has a number of appliances that consume energy and need to be scheduled in a given time window. In this section we change this in two ways. First, appliances no longer correspond to tasks that must be executed at all costs. Second, the house sits in an environment that exchanges heat with the building. In this context appliances should be thought of as air conditioning (AC) units used to control the interior temperature. This second framework is again inspired by the work in [3] but, as in Sect. 2, our goal is to understand the problem features that affect its complexity, and for this reason the model is presented in a somewhat simplified fashion.

The given house is split into a set \(\mathcal S\) of rooms, each having a thermostat for measuring the room temperature. The external environment affects the house in two ways. First, as before, the house can use the renewable energy generated by a single local micro-generation plant. Second, the house sits in an environment whose temperature \(\mathbf {T}_{out}\) is known in advance throughout the time window of interest. Each room \(s \in \mathcal S\) contains \(n_s > 0\) AC units. Each unit can either be OFF or make a certain contribution to the temperature of the room it is in. Assume that unit i in room s has a finite set of allowed temperature contributions \(\varDelta T_{s,i} = \{T^{s,i}_1,\dots ,\ T^{s,i}_{n_{s,i}}\}\). Positive (resp. negative) elements of \(\varDelta T_{s,i}\) correspond to the appliance being used as a heater (resp. cooler). The goal is to keep each room's temperature in a predefined comfort interval \([T^s_{min}(t), T^s_{max}(t)]\) for all \(t \in \mathcal{T}\). Following [14], for each room s, we assume that the room temperature at time t, denoted by \(T_s(t)\), is linked to the outside temperature and the room's units' behaviour by the equation:

$$\begin{aligned} \forall t \in \mathcal{T} \qquad T_s(t) = \epsilon \cdot T_s(t-1) + (1-\epsilon )(T_{out}(t) + x_s(t)), \end{aligned}$$
(1)

where \(x_s(t)\) is the average of the contributions of all appliances in the room that are ON at time t and \(\epsilon \in [0, 1]\) is an inertia factor. A discussion of the validity of (1) is beyond the scope of this paper (the interested reader is referred to [14]). However it is instructive to understand how the formula works. The room temperature at time t is viewed as a linear combination of its temperature at time \(t-1\), the contribution of external sources, and that of the air conditioning appliances the room contains. The coefficients of this combination depend on physical properties of the house, like its volume or the materials it is made of. Note that, if \(\epsilon = 1\), the internal environment is perfectly isolated from the outside and, in fact, we also have no way to control the internal temperature. On the other hand, if \(\epsilon = 0\), then there is no isolation and the system has no memory: at every moment in time the internal temperature is equal to the external one plus a value dependent on the contribution of the AC units. Therefore, to avoid trivialities, from now on we further assume that \(0<\epsilon <1\). The problem of interest admits a natural mixed integer linear programming formulation:
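Recurrence (1) is straightforward to simulate directly; the sketch below (function names are ours) unrolls it and illustrates the two extreme values of \(\epsilon \):

```python
def room_temperature(eps, T0, T_out, x):
    """Unroll Eq. (1): T(t) = eps*T(t-1) + (1-eps)*(T_out(t) + x(t)),
    starting from T(0) = T0.  T_out and x are lists whose position 0
    corresponds to time step t = 1."""
    T, trace = T0, []
    for t_out, x_t in zip(T_out, x):
        T = eps * T + (1 - eps) * (t_out + x_t)
        trace.append(T)
    return trace
```

With \(\epsilon = 1\) the trace stays at \(T_s(0)\) (perfect isolation, no control); with \(\epsilon = 0\) it tracks \(T_{out}(t) + x_s(t)\) exactly (no memory).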

$$\begin{aligned} \begin{array}{rclr} \min \ & & \mathbf {\lambda }^\mathrm{T} \cdot \mathbf {L} + \mathbf {\gamma }^\mathrm{T} \cdot \mathbf {G} & \text{s.t.} \\ \mathbf {G} & \le & \mathbf {P} & \\ \mathbf {L} + \mathbf {G} & = & \sum _{s \in \mathcal S} \sum _{i=1}^{n_s} \frac{1}{\eta ^s_i} \sum _{j=1}^{n_{s,i}} |T_j^{s,i}| \, \mathbf {y}_j^{s,i} & \\ \mathbf {x}_s & = & \frac{1}{n_s} \sum _{i=1}^{n_s} \sum _{j=1}^{n_{s,i}} T_j^{s,i} \, \mathbf {y}_j^{s,i} & \forall s \in \mathcal{S}\\ \sum _{j=1}^{n_{s,i}} \mathbf {y}_j^{s,i} & \le & \mathbf {1} & \forall s \in \mathcal{S},\ i \in \{1, \ldots , n_s\} \\ \mathbf {y}_j^{s,i} & \in & \{0,1\}^{(\tau )} & \forall s \in \mathcal{S},\ i \in \{1, \ldots , n_s\},\ j \in \{1, \ldots , n_{s,i}\} \\ \mathbf {E}_\epsilon \cdot \mathbf {x}_s & \ge & \mathbf {T}^s_{min} & \forall s \in \mathcal{S}\\ \mathbf {E}_\epsilon \cdot \mathbf {x}_s & \le & \mathbf {T}^s_{max} & \forall s \in \mathcal{S} \end{array} \end{aligned}$$

where \(y_j^{s,i}(t)\) is one (resp. zero) if the ith appliance in room s is (resp. is not) in state j at time t. Binary variables \(y_j^{s,i}(t)\) are used to model the fact that each AC unit can only be in one of its \(n_{s,i}\) states at any given time step. Discrete variables \(x_s(t)\) model the temperature change in room s at time t. We are after an assignment to these variables that minimizes the cost of the house appliances and keeps the temperature of room s in the comfort range \([T^s_{min}(t), T^s_{max}(t)]\) for every \(t \in \mathcal{T}\) and \(s \in \mathcal{S}\). The first four sets of constraints model the energy allocation process. The right-hand side of the second one is a vector describing the total amount of energy used by the house. Here \(\eta ^s_i > 0\) is the efficiency of unit i in room s, thus if the unit contributes \(T^{s,i}_j\) in a particular time step, then \(|T^{s,i}_j|/ \eta ^s_i\) is the amount of power needed by the unit during that step. The last three sets of constraints restrict the appliance choices at any given time to those that keep the room temperatures within the prescribed limits. Notice that the recursive constraints (1) have been solved and the internal temperature variables \(\mathbf {T}_s\) replaced by their values \((1-\epsilon ) \mathbf {F}_\epsilon [\mathbf {T}_{out} + \mathbf {x}_s] + \mathbf {T}_\epsilon \). Here \(\mathbf {E}_\epsilon \) and \(\mathbf {F}_\epsilon \) are the following \(\tau \times \tau \) matrices:

$$\begin{aligned} \mathbf {E}_\epsilon = \begin{pmatrix} \epsilon ^{\tau -1} &{} 0 &{} \dots &{} \dots &{} 0 \\ \epsilon ^{\tau -1} &{} \epsilon ^{\tau -2} &{} 0 &{} \dots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \ddots &{} \vdots \\ \epsilon ^{\tau -1} &{} \epsilon ^{\tau -2} &{} \dots &{} \epsilon &{} 0 \\ \epsilon ^{\tau -1} &{} \epsilon ^{\tau -2} &{} \dots &{} \epsilon &{} 1 \end{pmatrix} \qquad \mathbf {F}_\epsilon = \begin{pmatrix} 1 &{} 0 &{} \dots &{} 0 \\ \epsilon &{} 1 &{} \ddots &{} \vdots \\ \vdots &{} \ddots &{} \ddots &{} 0 \\ \epsilon ^{\tau -1} &{} \dots &{} \epsilon &{} 1 \end{pmatrix}. \end{aligned}$$

Also, the bounds on the products \(\mathbf {E}_\epsilon \cdot \mathbf {x}_s\), which, abusing notation, we still call \(\mathbf {T}^s_{min}\) and \(\mathbf {T}^s_{max}\), are, in fact, the only part of this model that depends on the initial internal temperature, the comfort limits, and the outside temperature. They stand for, respectively,

$$\begin{aligned} \mathbf {\epsilon }^\tau \left( \frac{1}{1-\epsilon }\left( \mathbf {T}^s_{min} - \mathbf {T}^s_\epsilon \right) - \mathbf {F}_\epsilon \cdot \mathbf {T}_{out}\right) \quad \mathrm{and} \quad \mathbf {\epsilon }^\tau \left( \frac{1}{1-\epsilon }\left( \mathbf {T}^s_{max} - \mathbf {T}^s_\epsilon \right) - \mathbf {F}_\epsilon \cdot \mathbf {T}_{out}\right) \end{aligned}$$

with

$$\begin{aligned} (\mathbf {\epsilon }^\tau )^\mathrm{T} = (\epsilon ^{\tau -1}, \epsilon ^{\tau -2}, \ldots , 1), \end{aligned}$$
(2)

and \(\mathbf {T}^\mathrm{T}_\epsilon = T_s(0) \cdot (\epsilon , \epsilon ^{2}, \ldots , \epsilon ^\tau )\).
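The substitution can be sanity-checked numerically: unrolling (1) gives \(T_s(t) = \epsilon ^t T_s(0) + (1-\epsilon )\sum _{i\le t}\epsilon ^{t-i}(T_{out}(i)+x_s(i))\), which is exactly row t of \((1-\epsilon ) \mathbf {F}_\epsilon [\mathbf {T}_{out} + \mathbf {x}_s] + \mathbf {T}_\epsilon \). A minimal check in exact rational arithmetic (function names are ours):

```python
from fractions import Fraction as F

def recurrence(eps, T0, T_out, x):
    """Direct unrolling of Eq. (1) with T(0) = T0; lists start at t = 1."""
    T, out = T0, []
    for i in range(len(x)):
        T = eps * T + (1 - eps) * (T_out[i] + x[i])
        out.append(T)
    return out

def closed_form(eps, T0, T_out, x):
    """Row t of (1-eps) * F_eps @ (T_out + x) + T_eps, where F_eps is the
    lower triangular matrix with entries eps^(t-i), and T_eps(t) = eps^t * T0."""
    return [eps ** (t + 1) * T0
            + (1 - eps) * sum(eps ** (t - i) * (T_out[i] + x[i])
                              for i in range(t + 1))
            for t in range(len(x))]
```

Using `Fraction` avoids floating point rounding, so the two computations agree exactly.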

3.1 Many heaters

We first consider the case of a house containing many AC units. First assume that they are all part of a single room. Starting from [8], various authors have defined a natural minimization version of the classical Knapsack problem. For future reference denote by Minsack \((\mathbf {w},\mathbf {p},M)\) an instance of such a problem involving item weights \(\mathbf {w}\), profits \(\mathbf {p}\) and knapsack bound M. This is exactly the problem we are interested in here, where items correspond to appliances and the allocation is over a single time step. Next, we will show that many single-state heaters make P2 NP-hard in a variable energy price regime, even if the heaters are installed in different rooms. In what follows Partition is the well-known NP-hard [12, Problem SP12] computational problem defined as follows:

Data:

\(a_1,\ldots , a_n\) positive integers.

Solution:

a subset \(I\subseteq \{1,\ldots , n\}\) such that \(\sum _{i\in I}a_i = \sum _{i \in \{1, \ldots , n\} {\setminus } I} a_i\).

Theorem 2

Partition \(\le \) P2.

Proof

Each instance of Partition is translated into an instance of P2 having \(\tau = 2\) and n rooms, each equipped with a single heater having a single “ON” state. Value \(a_s\) becomes the amount of energy needed by the unit in room s to run for a single time step. Thus for each \(s \in \{1, \ldots , n\}\), \(T^s = \eta _s \cdot a_s\) (the house and the rooms' thermostatic parameters \(\epsilon \) and \(\eta _s\) can be chosen arbitrarily). For each \(s \in \{1, \ldots , n\}\) we let \(T^s_{min}(1)\) be an arbitrary negative number and choose \(T^s_{min}(2)\) in the interval \((0,\epsilon \ T^s)\). Similarly we choose \(T^s_{max}(1) > \epsilon \ T^s\) and \(T^s_{max}(2) \in (T^s,(1+\epsilon )\ T^s)\). Finally, \(P(1) = P(2) = \frac{1}{2} \sum a_s\) and we assume that renewable energy costs nothing, whereas the energy from the grid has an arbitrary positive price \(\lambda \).

To complete the proof we need to show that if \(a_1, \ldots , a_n\) is a “YES” instance of Partition then the resulting instance of P2 can be solved with an allocation that has zero cost, and that if \(a_1, \ldots , a_n\) is a “NO” instance of Partition then any solution of P2 will have a positive cost. This will follow from a key property shared by all solutions to the instances of P2 under consideration: each solution requires each of the n heaters to be switched on exactly once. To prove this, notice that the temperature constraints simplify to

$$\begin{aligned} \mathbf {T}^s_{min} / T^s \le \left( \begin{array}{c@{\quad }c} \epsilon &{} 0 \\ \epsilon &{} 1\end{array} \right) \cdot \mathbf {y}^s \le \mathbf {T}^s_{max} / T^s \end{aligned}$$

and the product in the middle, depending on the value of \(\mathbf {y}^s\), is equal to \(\left( \begin{array}{c} 0 \\ 0 \end{array} \right) \), \(\left( \begin{array}{c} \epsilon \\ \epsilon \end{array} \right) \), \(\left( \begin{array}{c} 0 \\ 1 \end{array} \right) \), or \(\left( \begin{array}{c} \epsilon \\ 1+\epsilon \end{array} \right) \). But only the two middle values (corresponding to \(\mathbf {y}^s\) having exactly one of its two components equal to one) satisfy the stated inequalities.

Suppose now that the given instance of Partition is a “YES” instance and let I be one of its solutions. Then scheduling all units in I to run at the same time step will result in a feasible solution for P2 with cost zero. Conversely if the given instance of Partition is a “NO” instance then any solution of P2 must have a positive cost. \(\square \)
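On small instances the argument can be checked by enumeration: since each heater must run exactly once, a schedule is a choice of step 1 or 2 per room, and the cost is zero iff both step loads fit under \(P = \frac{1}{2}\sum a_s\). A sketch with hypothetical names:

```python
from itertools import product

def min_cost_p2_instance(a, lam=1):
    """Minimum cost of the Theorem 2 instance: n rooms, one single-state
    heater each; the temperature windows force heater s to run in exactly
    one of the two steps; renewable power P(1) = P(2) = sum(a)/2 is free,
    and any excess is bought from the NEG at price lam."""
    half = sum(a) / 2
    best = float("inf")
    for choice in product((0, 1), repeat=len(a)):  # step in which heater s runs
        load = [sum(x for x, c in zip(a, choice) if c == t) for t in (0, 1)]
        best = min(best, sum(lam * max(0, l - half) for l in load))
    return best

def has_partition(a):
    return min_cost_p2_instance(a) == 0
```

Since the two loads always sum to \(\sum a_s\), the cost is zero exactly when they are equal, i.e. when the numbers admit a partition.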

The upshot of the analysis in this section is that P2 does not seem very easy to solve if the system contains a large number of AC units. In the rest of the paper we will concentrate on increasingly restricted versions of this problem.

3.2 One heater, a hard case

From now on we focus on a further restriction, which we call PS, obtained by assuming that the house only contains one room, that there is always enough renewable power, and that the single AC unit can be used in n different states, all of them providing a positive temperature contribution. We start our analysis by showing that PS is still NP-hard if the electricity price varies and n is large. The SubsetSum problem [12, Problem SP13] is defined as follows:

Data:

\(a_1,\ldots , a_n\) and M positive integers.

Solution:

a subset \(I\subseteq \{ 1,\ldots , n\}\) such that \(\sum _{i\in I}a_i = M\).

Theorem 3

SubsetSum \(\le \) PS.

Here PS denotes the decision version of the energy allocation problem at hand. The reduction translates each instance of SubsetSum to an instance of PS involving a single heater with many different (positive) temperature contributions. Let \(a_1, a_2, \ldots , a_n\) and M define an instance of SubsetSum. We set \(\tau = n\). Furthermore set

  • \(\epsilon = \frac{1}{2}\min \left( \frac{1}{\tau \max \{a_i\}}, \frac{1}{M}\right) \);

  • the energy prices \(\gamma (t) = \epsilon ^{\tau -t}\) for all \(t \in \mathcal{T}\);

  • the heater temperature contributions \(T_j = a_j/\epsilon ^{\tau -j}\) for all \(j \in \{1, \ldots , n\}\), and \(\eta = 1\), and

  • finally set \(T_{min}(t) = 0\) for all \(t\in \mathcal{T} {\setminus } \{\tau \}\) and \(T_{min}(\tau ) = M\).

We argue that the SubsetSum instance is a “YES” instance if and only if the instance of PS admits a solution of cost M. First notice that, with these choices, the first \(\tau - 1\) temperature constraints are always satisfied, since the expressions

$$\begin{aligned} \sum _{1\le i \le t} \sum _{1 \le j \le n} T_j \cdot y_j(i)\cdot \epsilon ^{\tau - i} \qquad t \in \mathcal{T} {\setminus } \{\tau \} \end{aligned}$$

are non-negative. Denote by \(\Psi \) the problem’s objective function. The last temperature constraint actually implies that a “YES” instance of PS must have \(\Psi = M\). The next lemma states a property that can be used to complete the proof of Theorem 3.

Lemma 1

If \(\mathbf {y}\) is a solution of our reduced problem then for all \(t\in \mathcal{T}\) we have \(x(t) = T_t\) or \(x(t) = 0\).

Proof

We will show first that for all \(t \in \mathcal{T}\) we have \(x(t) \not \in \{T_1,\ \dots ,\ T_{t-1}\}\), and then that \(x(t) \not \in \{T_{t+1},\ \dots ,\ T_{\tau }\}\). The first step comes from the fact that if there were a t such that \(x(t) = T_l\) with \(l<t\), we would have \(\epsilon ^{\tau -t}\cdot x(t) = \epsilon ^{\tau -t}\cdot T_l = \epsilon ^{\tau -t}\cdot a_l/\epsilon ^{\tau -l} = a_l/\epsilon ^{t-l} \ge a_l/\epsilon \), and since \(\epsilon < 1/M\) this value is greater than M. Hence the objective function would also be greater than M. Now to the second step. Suppose we have found a solution such that \(\Psi = M\) and that for some \(t \in \{1,\ \dots ,\ n\}\) we have \(x(t) = T_l\) with \(l>t\). Let A be the set of these t. We decompose \(\Psi \) into two sums: \(\sum _{t \not \in A}\epsilon ^{\tau -t}\cdot x(t) + \sum _{t \in A}\epsilon ^{\tau -t}\cdot x(t)= M\). Since we already proved that \(x(t) \not \in \{T_1,\ \dots ,\ T_{t-1}\}\) for all \(t \in \mathcal{T}\), the terms of the first sum are either 0s or \(a_t\)s, so the first sum is an integer. Let us take a closer look at the second sum: for all \(t \in A\) there is an \(l_t\) such that \(x(t) = T_{l_t}\) with \(l_t>t\) by definition, so we have:

$$\begin{aligned} \sum _{t \in A}\epsilon ^{\tau -t}\cdot x(t)&= \sum _{t \in A}\epsilon ^{\tau -t}\cdot \frac{a_{l_t}}{\epsilon ^{\tau -l_t}}&\text {by the choice of } x(t) \\&\le \sum _{t \in A}\epsilon \cdot a_{l_t}&\text {since } 0<\epsilon < 1 \text { and } l_t, t \text { are integers with } l_t > t \\&\le |A| \cdot \epsilon \cdot \max \{a_j\}&\text {by definition of max} \\&\le \tau \cdot \epsilon \cdot \max \{a_j\}&\text {by definition of } A \end{aligned}$$

hence, since \(\epsilon < 1/ \tau \max \{a_j\}\), this sum is less than 1, thus it is not an integer, and this in turn contradicts the fact that M and the first sum are integers. \(\square \)
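The whole reduction can be verified exactly on tiny instances using rational arithmetic, so that the powers of \(\epsilon \) are computed without rounding. The sketch below (names ours) enumerates all state assignments and computes the minimum feasible cost, which equals M exactly when some subset of the \(a_i\) sums to M:

```python
from fractions import Fraction as F
from itertools import product

def min_ps_cost(a, M):
    """Exact check of the Theorem 3 instance.  At each of tau = n steps the
    heater is OFF or in one of the n states T_j = a_j / eps^(tau-j); choosing
    state j at step t contributes a_j * eps^(j-t) (0-based indices) both to
    the cost and to the final-temperature constraint T_min(tau) = M.  Returns
    the minimum cost over all feasible schedules."""
    n = tau = len(a)
    eps = F(1, 2) * min(F(1, tau * max(a)), F(1, M))
    weight = {(t, j): a[j] * eps ** (j - t)
              for t in range(tau) for j in range(n)}
    best = None
    for states in product([None] + list(range(n)), repeat=tau):
        total = sum(weight[t, j] for t, j in enumerate(states) if j is not None)
        if total >= M and (best is None or total < best):
            best = total
    return best
```

For a “YES” instance (e.g. \(a = (1,2,3)\), \(M = 4\)) the diagonal choices \(x(t) = T_t\) for \(t\) in a witnessing subset achieve cost exactly M; for a “NO” instance Lemma 1 rules out cost M, so the minimum strictly exceeds it.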

3.3 Polynomial time algorithms

So far we have discovered that instances of PS involving variable energy costs and heaters with many temperature levels may be hard to solve. We complete this section by describing a variant of PS that can be solved in polynomial time. Consider the following problem:

$$\begin{aligned} \begin{array}{rclr} \min \ &{} &{} \mathbf {\gamma }^\mathrm{T} \cdot \mathbf {y} &{}\text{ s.t. } \\ \mathbf {y} &{} \in &{} \{0,1\}^{(\tau )} &{} \\ \mathbf {E}_\epsilon \cdot \mathbf {y} &{} \ge &{} \mathbf {T}_{min} &{} \end{array} \end{aligned}$$
Algorithm Greedy (figure)

If the electricity price is fixed (i.e., w.l.o.g. \(\mathbf {\gamma } = \mathbf {1}\)) and there is a single heater, which can either be “OFF” or in a single “ON” state contributing some value T to the house temperature, then we claim that Algorithm Greedy can be used to find an optimal solution for the problem above in polynomial time. The idea is to solve the \(\tau \) constraints \(\mathbf {E}_\epsilon \cdot \mathbf {y} \ge \mathbf {T}_{min}\) iteratively, one by one. The algorithm's while loop has the task of trying to build a vector \(\mathbf {y}\) that satisfies the temperature constraints. It is easy to argue (formally, by induction on t) that after the t-th iteration of the main for loop, if we did not enter the if on line 8, then \(y(1), \ldots , y(t)\) is a minimal solution of the problem. Note that the left-hand side of the t-th inequality, \(\sum _{1\le x \le t} y(x) \cdot \epsilon ^{\tau - x}\), is just \(y(t) \cdot \epsilon ^{\tau -t}\) plus the left-hand side of the \((t-1)\)-th inequality. Hence, if the process reached stage t, either \(\sum _{1\le x \le t-1} y(x) \cdot \epsilon ^{\tau - x} \ge T_{min}(t)\), in which case the same assignment will be picked up in the while loop at stage t, or \(\sum _{1\le x \le t-1} y(x) \cdot \epsilon ^{\tau - x} < T_{min}(t)\), in which case a different assignment will be computed. The new assignment will satisfy all previous constraints because of the order in which the variables y(x) are set (starting from the ones multiplying the largest monomials \(\epsilon ^{\tau -x}\)).
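One possible reading of the strategy just described can be sketched as follows; this is our reconstruction from the description above and may differ from Algorithm Greedy's actual pseudocode in details (0-based indices). It only ever switches the heater ON, latest step first, so constraints satisfied at earlier stages remain satisfied:

```python
def greedy_schedule(eps, T_min):
    """Greedy sketch for: min 1^T y  s.t.  E_eps @ y >= T_min, y in {0,1}^tau.
    Constraint t reads sum_{x<=t} y[x]*eps^(tau-1-x) >= T_min[t] (0-based).
    When a constraint fails, switch on unused steps x <= t starting from the
    largest coefficient, i.e. the latest step.  Returns y, or None if
    infeasible."""
    tau = len(T_min)
    y = [0] * tau
    for t in range(tau):
        lhs = sum(y[x] * eps ** (tau - 1 - x) for x in range(t + 1))
        x = t
        while lhs < T_min[t] and x >= 0:
            if y[x] == 0:
                y[x] = 1
                lhs += eps ** (tau - 1 - x)
            x -= 1
        if lhs < T_min[t]:
            return None      # even y[0..t] all ON cannot reach T_min[t]
    return y
```

Because ONs are only added and never removed, the prefix sums are monotone over the stages, matching the inductive argument above.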

The analysis so far leaves the variable energy cost case open. While we are not able to answer this in full, in the remainder of this section we present a polynomial time strategy to solve the problem above provided \(\epsilon \) is a positive constant smaller than 1/2. In what follows we denote this problem as PS\((\frac{1}{2})\). The result hinges on a particular property of the sequence \({\mathbf {\epsilon }}^\tau \), as defined in (2), and on the computational feasibility of a class of Knapsack instances involving sequences of this type. The main result of this section is the following:

Theorem 4

PS\((\frac{1}{2})\) can be solved in polynomial time.

A sequence of non-negative real numbers \(\mathbf {w} = (w(1), \ldots , w(a))\) is left independent if for all positive integers \(j\le a\) we have \(\sum _{i=1}^{j-1}w(i) < w(j)\). Note that if \(\mathbf {w}\) is left independent, then so is any of its prefixes \(\mathbf {w}_i = (w(1), \ldots , w(i))\), for \(i<a\). Also, it is easy to see that the sequence \(\mathbf {\epsilon }^\tau \) is left independent, provided \(\epsilon \in (0,1/2)\). Left independent sequences also have another important property.
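Left independence is mechanical to check; a small sketch (function names are ours), using exact rationals for the sequence \(\mathbf {\epsilon }^\tau \) of (2):

```python
from fractions import Fraction as F

def left_independent(w):
    """True iff sum(w[:j]) < w[j] for every position j (strict prefix bound)."""
    total = 0
    for wj in w:
        if total >= wj:
            return False
        total += wj
    return True

def eps_tau(eps, tau):
    """The sequence (eps^(tau-1), ..., eps, 1) of Eq. (2)."""
    return [eps ** (tau - t) for t in range(1, tau + 1)]
```

For instance, with \(\epsilon = 1/3\) every prefix sum stays below the next term, while with \(\epsilon = 2/3\) the bound already fails at the last position.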

Lemma 2

Minsack \((\mathbf {w},\mathbf {p},M)\) can be solved in polynomial time, for any profit vector \(\mathbf {p}\) and bound M, provided \(\mathbf {w}\) is left independent.

Proof

Algorithm EKP solves Minsack \((\mathbf {w},\mathbf {p},M)\) in polynomial time if \(\mathbf {w}\) is left independent. The process returns the empty set if \(M \le 0\); otherwise it searches for the greatest \(i\le a\) such that w(i) is lower than M and runs recursively on the first \(i-1\) items with bound \(M-w(i)\), obtaining a set S. It then compares \(\sum _{j\in S}p(j) + p(i)\) with the lowest p(j) over \(j>i\), and returns \(S\cup \{i\}\) if the former is lower and \(\{j\}\) otherwise.

Algorithm EKP (figure)

We prove that EKP works as promised by induction on a. The case \(a=1\) is simple: if \(w(1) \ge M > 0\) we return \(\{1\}\), and if \(M \le 0\) we return \(\emptyset \). Suppose now the algorithm works for 1, 2, ..., \(a-1\). The first case of the main if needs no comment: if \(M \le 0\) then \(\emptyset \) is the best solution, and otherwise we choose the j minimizing p(j), since every singleton \(\{j\}\) is feasible. To show that the algorithm works in the second case we just have to prove that \(S\cup \{i\}\) is the smallest solution containing no \(j>i\). This is quite simple to show: since \(\sum _{1\le j\le i-1}w(j)< w(i) < M\), if we do not take any \(j>i\) then we are forced to take i, and so we just have to find the minimal solution for Minsack \((\mathbf {w}_{i-1}, \mathbf {p}_{i-1},M-w(i))\), which is exactly S by the induction hypothesis. \(\square \)
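The recursion just described can be sketched as follows (our reconstruction of EKP, with 0-based indices; we also return None when even taking every item cannot reach M, a boundary case the text leaves implicit):

```python
def ekp(w, p, M):
    """Sketch of Algorithm EKP: Minsack(w, p, M) with w left independent
    (hence strictly increasing).  Finds a minimum-profit set of item indices
    whose weights sum to at least M; returns None if even the full set of
    items falls short of M."""
    a = len(w)
    if M <= 0:
        return set()
    if sum(w) < M:
        return None                                  # infeasible
    i = max((k for k in range(a) if w[k] < M), default=None)
    if i is None:                                    # every single item covers M
        return {min(range(a), key=lambda k: p[k])}
    # any solution avoiding all items > i is forced to take i, then cover M - w[i]
    best_with_i = ekp(w[:i], p[:i], M - w[i])
    best_with_i = None if best_with_i is None else best_with_i | {i}
    if i == a - 1:                                   # no single item covers M
        return best_with_i
    j = min(range(i + 1, a), key=lambda k: p[k])     # cheapest single-item cover
    if best_with_i is None or sum(p[k] for k in best_with_i) >= p[j]:
        return {j}
    return best_with_i
```

Each call discards at least one item, so the recursion depth is at most a, giving a polynomial running time as the lemma claims.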

In what follows, if \(S\subset \mathcal{T}\), let \(\mathbf {\chi }_S\in \{0, 1\}^\tau \) be the vector of size \(\tau \) such that \(\chi _S(t) = 1 \iff t\in S\). Also, \(\min S\) (resp. \(\max S\)) is the smallest (resp. largest) element of S and, if \(t_1\le t_2\), then \([t_1\ldots t_2]\) denotes the set \(\{t_1, t_1+1, \dots , t_2\}\). Finally, for any \(S \subset \mathcal{T}\) let \(v(S) = \sum _{t\in S} \epsilon ^{\tau -t}\) and, for any vector \(\mathbf {p}\) of size \(\tau \), denote by \({\mathbf {p}}[S]\) the expression \(\sum _{t\in S} p(t)\).

Lemma 3

Let \(\mathbf {p}\) be a vector of size \(\tau \) and \(S_1,\, S_2\subset \mathcal{T}\).

  1.

    If \(S_1,S_2\) are disjoint then \(v(S_1 \cup S_2) = v(S_1) + v(S_2)\) and \({\mathbf {p}}[S_1 \cup S_2] = {\mathbf {p}}[S_1] + {\mathbf {p}}[S_2]\).

  2.

    The function v is injective.

  3.

    \(v(S_1) < v(S_2)\) if and only if there exist \(t\in \mathcal{T} {\setminus } S_1\) and \(S\subset [1 \ldots t-1]\) such that \(S_2 = (S_1 {\setminus } [1 \ldots t]) \cup \{t\} \cup S\).

Proof

The first claim is trivial. For the second, suppose \(S_1 \not = S_2\); we will show that \(v(S_1) \not = v(S_2)\). Let \(t = \max S_1 \varDelta S_2\), where \(S_1 \varDelta S_2\) is the symmetric difference of \(S_1\) and \(S_2\). Without loss of generality we may assume that \(t \in S_2\). Then by the left independence of \(\mathbf {\epsilon }^\tau \) we have \(v(S_1 \cap [1\ldots t]) < \epsilon ^{\tau -t} \le v(S_2 \cap [1\ldots t])\). This in turn implies that \(v(S_1 \cap [1\ldots t]) + v(S_1 {\setminus } [1\ldots t]) < v(S_2 \cap [1\ldots t]) + v(S_2 {\setminus } [1\ldots t])\), using also that \(S_1 {\setminus } [1\ldots t] = S_2 {\setminus } [1\ldots t]\). By the first part of the statement this gives \(v(S_1) < v(S_2)\), as required.

We now argue about the final claim. If there exists a t as stated such that \(S_2 = (S_1 {{\setminus }} [1\ldots t]) \cup \{t\} \cup S\) then by left independence we have \(v(S_1\cap [1\ldots t]) < \epsilon ^{\tau -t}\) since \(t\not \in S_1\). Therefore:

$$\begin{aligned} \begin{aligned} v(S_1)&= v(S_1\cap [1 \ldots t]) + v(S_1 {\setminus } [1 \ldots t]) \quad \text { by the first statement}\\&< \epsilon ^{\tau -t} + v(S_1{\setminus } [1 \ldots t]) \\&\le v(S) + \epsilon ^{\tau -t} + v(S_1{\setminus } [1 \ldots t]) \quad \text { for any } S\subset [1\ldots t-1] \\&= v(S_2) \quad \text { again by the first statement} \end{aligned} \end{aligned}$$

Conversely, let \(t = \max S_1 \varDelta S_2\). Then it must be that \(t\in S_2\), since otherwise, by definition of the symmetric difference, we would have \(S_1 {\setminus } [1\ldots t] = S_2 {\setminus } [1\ldots t]\) and so, by the same argument used to prove the other implication, we would have

$$\begin{aligned} \begin{aligned} v(S_2)&= v(S_2\cap [1\ldots t]) + v(S_2{\setminus } [1\ldots t]) = v(S_2\cap [1\ldots t-1]) + v(S_2{\setminus } [1\ldots t]) \\&< \epsilon ^{\tau -t} + v(S_2{\setminus } [1\ldots t]) = \epsilon ^{\tau -t} + v(S_1{\setminus } [1\ldots t]) \\&< v(S_1) \end{aligned} \end{aligned}$$

which is not possible by hypothesis. Hence \(S_2 = (S_1 {\setminus } [1\ldots t]) \cup \{t\} \cup S\) with \(S = S_2\cap [1\ldots t-1] \subset [1\ldots t-1]\). \(\square \)
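For small values of \(\tau \) the characterisation in the third claim can be checked exhaustively. The sketch below (our code, with \(\epsilon = 0.4\) and \(\tau = 5\) as arbitrary test values) compares \(v(S_1) < v(S_2)\) against the structural condition for every pair of subsets of \(\mathcal T\):

```python
from itertools import combinations

eps, tau = 0.4, 5
T = range(1, tau + 1)

def v(S):
    return sum(eps ** (tau - t) for t in S)

subsets = [frozenset(c) for r in range(tau + 1) for c in combinations(T, r)]

def structural(S1, S2):
    """Does S2 = (S1 \\ [1..t]) ∪ {t} ∪ S hold for some t ∉ S1 (with t ∈ S2)
    and some S ⊆ [1..t-1]?  Equivalently: S1 and S2 agree strictly above t."""
    return any(t not in S1 and t in S2
               and {x for x in S1 if x > t} == {x for x in S2 if x > t}
               for t in T)

assert all((v(S1) < v(S2)) == structural(S1, S2)
           for S1 in subsets for S2 in subsets)
print("third claim of Lemma 3 verified exhaustively for tau =", tau)
```

The equivalence inside `structural` holds because, once the two sets agree above t, the part of \(S_2\) below t automatically plays the role of S.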

If M is a positive real number, the \(\epsilon \)-decomposition of M is the subset \(S_{\epsilon }(M)\) of \(\mathcal{T}\) such that:

$$\begin{aligned} v(S_{\epsilon }(M)) = \min \{v(I) \mid I\subset \mathcal{T}\, \text {and }v(I) \ge M\}. \end{aligned}$$

Thus \(v(S_{\epsilon }(M))\) is the smallest number greater than or equal to M which can be written as a sum of distinct powers \(\epsilon ^{\tau -\ell }\) for \(\ell \in \mathcal{T}\). Note that \(S_{\epsilon }(M)\) can be computed by solving Minsack \((\mathbf {\epsilon }^\tau , \mathbf {\epsilon }^\tau , M)\).
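For illustration, the \(\epsilon \)-decomposition of a small instance can be found by enumeration (our code; the paper computes \(S_\epsilon (M)\) in polynomial time via Minsack, whereas the sketch below simply tries all subsets and is exponential in \(\tau \)):

```python
from itertools import combinations

def v(S, eps, tau):
    return sum(eps ** (tau - t) for t in S)

def eps_decomposition(M, eps, tau):
    """S_eps(M): the I ⊆ {1..tau} minimising v(I) subject to v(I) >= M,
    or None if even taking all of {1..tau} falls short of M."""
    best = None
    for r in range(tau + 1):
        for c in combinations(range(1, tau + 1), r):
            val = v(c, eps, tau)
            if val >= M and (best is None or val < best[0]):
                best = (val, set(c))
    return best[1] if best else None

eps, tau = 0.4, 5  # weights: 0.0256, 0.064, 0.16, 0.4, 1.0 for t = 1..5
print(sorted(eps_decomposition(0.3, eps, tau)))   # [4]: v = 0.4
print(sorted(eps_decomposition(0.42, eps, tau)))  # [1, 4]: v = 0.4256
```

By the injectivity of v (Lemma 3), the minimiser is unique, so the strict comparison in the search never has to break a tie.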

We are now ready to complete the proof of Theorem 4.

[Figure: pseudocode of Algorithm Main]

We claim that algorithm Main takes \(\mathbf {T}_{min}\), the cost function \(\mathbf {\gamma }\), and \(\epsilon \) as arguments and returns an optimal solution to the given instance of PS\((\frac{1}{2})\) in time polynomial in \(\tau \). The process starts by creating the \(\epsilon \)-decompositions of the numbers \(T_{min}(t)\) and storing them in the array Sol (at the same time checking whether the given problem is trivially infeasible). The second loop is the main part of the algorithm. In the tth iteration we focus on Sol[t] (assuming it is not empty). Thanks to the left independence of the rows of \({\mathbf {E}}_\epsilon \), instead of searching for the minimal combination satisfying the first t constraints among all possible combinations, we can concentrate on a set of fewer than \(\tau \) possibilities. We then update the rest of Sol in order to simplify the search for solutions in the subsequent iterations.

Note that all the set operations, and the computations of \({\mathbf {\gamma }}[S]\), v(S), and \(S_\epsilon (T_{min}(t))\), can be done (using algorithm EKP as a subroutine) in time \(O(\tau )\); all loops have at most \(\tau \) iterations and there are at most two nested loops. Hence Algorithm Main runs in \(O(\tau ^3)\) time.

Next we argue that Algorithm Main returns a solution if and only if the given instance \(\mathcal I\) of PS\((\frac{1}{2})\) is feasible. This can be seen through the following chain of equivalences:

$$\begin{aligned} \begin{aligned}&\text {there is a solution} \\&\quad \iff \mathcal{T} \text { is a solution} \\&\quad \iff \forall t\in \mathcal{T} \quad T_{min}(t) \le \sum _{k\in [1\ldots t]}\epsilon ^{\tau -k} \\&\quad \iff \forall t\in \mathcal{T} \quad v(S_\epsilon (T_{min}(t))) \le \sum _{k\in [1\ldots t]}\epsilon ^{\tau -k} \\&\quad \quad \qquad \qquad \text { (by definition of }\epsilon \text {-decomposition)} \\&\quad \iff \text {Main returns a solution } \quad \text { (by left independence)}. \end{aligned} \end{aligned}$$
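The middle condition in the chain above is just a prefix-sum comparison, sketched below (the function and its name are ours; T_min is passed as an ordinary Python list, indexed from 1 in spirit):

```python
def feasible(T_min, eps):
    """Check T_min(t) <= sum_{k=1..t} eps**(tau - k) for every t in {1..tau}."""
    tau = len(T_min)
    prefix = 0.0
    for t in range(1, tau + 1):
        prefix += eps ** (tau - t)
        if T_min[t - 1] > prefix:
            return False
    return True

# Prefix capacities for eps = 0.4, tau = 5: 0.0256, 0.0896, 0.2496, 0.6496, 1.6496
print(feasible([0.0, 0.05, 0.2, 0.5, 1.2], 0.4))  # True
print(feasible([0.1, 0.0, 0.0, 0.0, 0.0], 0.4))   # False: 0.1 > 0.0256
```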

To complete the proof of the Theorem we need to argue that if S is returned by Main then it is an optimal solution to \(\mathcal I\). In what follows, given an instance \(\mathcal I\) of PS\((\frac{1}{2})\) and \(t\in \mathcal{T}\), denote by \(\mathcal{I}_t\) the sub-problem of \(\mathcal I\) obtained using \((\mathbf {E}_\epsilon )_t\), the sub-matrix of \(\mathbf {E}_\epsilon \) formed by the first t rows and columns, and the prefixes \((\mathbf {T}_{min})_t\) and \(\mathbf {\gamma }_t\). Note that \(\mathcal{I}\) coincides with \(\mathcal{I}_\tau \). Also, if \(1\le k < t\) and S is such that \((\mathbf {T}_{min})_t \le (\mathbf {E}_\epsilon )_t \cdot \mathbf {\chi }_S\), then \((\mathbf {T}_{min})_k \le (\mathbf {E}_\epsilon )_k \cdot \mathbf {\chi }_{S\cap [1\ldots k]}\), since, for each \(j \in \{1, \ldots , k\}\), only the first j elements of the jth row of \(\mathbf {E}_\epsilon \) are non-zero.

Assume that we ran Main and that the algorithm returned a solution. We will show by induction on t that Sol[t] (seen as a subset of \([1\ldots t]\)) is a solution to the sub-problem of size t. The case \(t=1\) is obvious. Suppose now that Sol[k] is a solution to the sub-problem of size k for \(1\le k < t\); we want to show that Sol[t] is a solution to the sub-problem of size t.

First we show that \((\mathbf {T}_{min})_t \le (\mathbf {E}_\epsilon )_t \cdot \mathbf {\chi }_{\mathsf{Sol}[t]}\). By construction we have \(\mathsf{Sol}[t] = \mathsf{Sol}[k-1] \cup \{k\} \cup (S_\epsilon (T_{min}(t)) {\setminus } [1\ldots k])\) for some \(m \le k \le t\), with m being the minimum of Sol[t] at the beginning of the tth iteration. Therefore, by the induction hypothesis, \((\mathbf {T}_{min})_{k-1} \le (\mathbf {E}_\epsilon )_{k-1} \cdot \mathbf {\chi }_{\mathsf{Sol}[k-1]} = (\mathbf {E}_\epsilon )_{k-1} \cdot \mathbf {\chi }_{ \mathsf{Sol}[t]\cap [1\ldots k-1] }\). Let us now consider \(k \le j < t\). We must have \(v(\mathsf{Sol}[j]) < v(S_\epsilon (T_{min}(t)) \cap [m\ldots j])\), since otherwise the for loop on lines 17–20 in Algorithm Main would force \(j < m\), which is not possible since \(m \le k \le j\). So, by the induction hypothesis and Lemma 3, since \(k \not \in S_\epsilon (T_{min}(t))\), we get \(T_{min}(j) \le v(\mathsf{Sol}[j])< v(S_\epsilon (T_{min}(t)) \cap [m\ldots j]) \le v(S_\epsilon (T_{min}(t)) \cap [1\ldots j]) < v(\mathsf{Sol}[t] \cap [1\ldots j])\), for all \(j\in [k\ldots t-1]\). Finally, by Lemma 3 we also have \(T_{min}(t) \le v(S_\epsilon (T_{min}(t))) \le v(\mathsf{Sol}[t])\).

Now take another feasible solution \(S\subset [1\ldots t]\). We will show that \({\mathbf {\gamma }}[\mathsf{Sol}[t]] \le {\mathbf {\gamma }}[S]\). Since \((\mathbf {T}_{min})_t \le (\mathbf {E}_\epsilon )_t \cdot \mathbf {\chi }_S\), we also have \(v(S_\epsilon (T_{min}(t))) \le v(S)\), and so by Lemma 3 there exist \(j\in ([1\ldots t] {\setminus } S_\epsilon (T_{min}(t))) \cup \{\min S_\epsilon (T_{min}(t))\}\) and \(S'\subset [1\ldots j-1]\) such that \(S=(S_\epsilon (T_{min}(t)) {\setminus } [1\ldots j]) \cup \{j\} \cup S'\) (with \(j=\min S_\epsilon (T_{min}(t))\) in the case of equality, since v is injective). If \(j < m\) then we have \(S {\setminus } [1\ldots m-1] = S_\epsilon (T_{min}(t)) {\setminus } [1\ldots m-1]\) and so by the induction hypothesis we have:

$$\begin{aligned} \begin{aligned} {\mathbf {\gamma }}[\mathsf{Sol}[m-1]]&\le {\mathbf {\gamma }}[S\cap [1\ldots m-1]] \\ {\mathbf {\gamma }}[\mathsf{Sol}[m-1]] + {\mathbf {\gamma }}[S_\epsilon (T_{min}(t)){\setminus } [1\ldots m-1]]&\le {\mathbf {\gamma }}[S\cap [1\ldots m-1]] + {\mathbf {\gamma }}[S {\setminus } [1\ldots m-1]] \\ {\mathbf {\gamma }}[\mathsf{Sol}[m-1] \cup \{m\} \cup (S_\epsilon (T_{min}(t)) {\setminus } [1\ldots m])]&\le {\mathbf {\gamma }}[S] \end{aligned} \end{aligned}$$

where the last inequality follows from Lemma 3.3 and from \(m\in S_\epsilon (T_{min}(t))\). This gives \({\mathbf {\gamma }}[\mathsf{Sol}[t]] \le {\mathbf {\gamma }}[S]\), since \({\mathbf {\gamma }}[\mathsf{Sol}[t]] \le {\mathbf {\gamma }}[\mathsf{Sol}[m-1] \cup \{m\} \cup (S_\epsilon (T_{min}(t)) {\setminus } [1\ldots m])]\) by construction. Now if \(j \ge m\) we similarly have \(S {\setminus } [1\ldots j-1] = \{j\} \cup (S_\epsilon (T_{min}(t)) {\setminus } [1\ldots j])\) and so:

$$\begin{aligned} \begin{aligned} {\mathbf {\gamma }}[\mathsf{Sol}[j-1]]&\le {\mathbf {\gamma }}[S\cap [1\ldots j-1]] \\ {\mathbf {\gamma }}[\mathsf{Sol}[j-1]] + {\mathbf {\gamma }}[\{j\} \cup (S_\epsilon (T_{min}(t)) {\setminus } [1\ldots j])]&\le {\mathbf {\gamma }}[S\cap [1\ldots j-1]] + {\mathbf {\gamma }}[S {\setminus } [1\ldots j-1]] \\ {\mathbf {\gamma }}[\mathsf{Sol}[j-1] \cup \{j\} \cup (S_\epsilon (T_{min}(t)) {\setminus } [1\ldots j])]&\le {\mathbf {\gamma }}[S], \end{aligned} \end{aligned}$$

which gives us again \({\mathbf {\gamma }}[\mathsf{Sol}[t]] \le {\mathbf {\gamma }}[S]\).

4 Conclusion

We studied a number of energy allocation optimization problems which may occur in domestic buildings. Two broad cases were considered: a “must-use” scenario where a set of appliances must be scheduled over a given time horizon, and a “comfort-aware” scenario where the appliances help to satisfy a predefined environment comfort level. In all cases we were interested in minimal energy cost solutions. Our main goal was to investigate the computational complexity of the relevant problems and characterize the border between polynomial-time tractability and NP-hardness. We studied the effect of the number of appliances on the complexity of problems of the first type, and that of the type and distribution of the AC units, as well as the energy price and the thermal properties of the given environment on problems of the second type. The main result of the paper is a proof that although it is NP-hard to schedule the operation of a single air-conditioning (AC) unit, working at various temperature levels in a variable energy price regime, there is a polynomial time algorithm for controlling one such device working at a single temperature level, for houses with low thermal inertia. The proof of this result uses the algorithmic properties of a variant of the well-known Knapsack problem.