This section outlines the design of the field test, including the recruitment of the sites, the automation systems, the DR events, and the analysis methods, among them the DR baselines used to estimate peak demand reductions.
Sites
The project began with a recruitment phase that included the development of outreach materials for potential participants, surveys of existing electricity loads, and an assessment of 15-min electric load data for winter and summer periods. The first winter tests were conducted in the early months of 2009, and the summer tests were conducted in summer 2009. Five commercial building sites were recruited under a negotiated memorandum of understanding that described the sequence of the field tests. We collected building systems descriptions using a common site audit format, along with mechanical and electrical drawings when available. Most of the sites provided control system trend logs relevant to the analysis of the DR strategies. Each site had archived data from electric meters; in one case we installed an electric meter for the duration of the project. Each site was offered $3,000 for setup plus $1,000 per event in the winter, and $2,000 for setup plus $500 per event in the summer. Although the summer incentives were smaller, all participants from the winter tests took part in the summer tests.
Automation
The DR signals for this project were published on a DR automation web services server, available on the Internet, using OpenADR signals. Each of the five participating facilities monitored the DR signal using a Web services client application and automatically shed site-specific electrical loads when the proxy price increased. This project demonstrated use of the Open Automated Demand Response Communication Specification (version 1.0), which is designed to facilitate DR automation without human intervention (Piette et al. 2009). Each site was outfitted to receive price proxy signals (or the associated operational mode signals) by one of two methods: if a site could host an embedded client in its energy management control system (EMCS), a software implementation was preferred; if no such controls were present, a Client Logic Integrated Relay (CLIR) box was installed (Piette et al. 2006). Each facility’s EMCS vendor was hired to program the load-shed or shift control strategy to respond to the increase in the price signal.
During the winter and summer test periods, SCL system operators determined the event start and end times. The DR automation server (DRAS) was programmed to send the DR test notifications to each participant. Winter DR events started at 7 a.m. and ended at 10 a.m. The events were dispatched based on the minimum outside air temperature during the DR period: at the beginning of each week, DR events were scheduled for the coldest days of the week as predicted in weather forecasts. Summer DR events started at noon and ended at 5 p.m. The summer events were called when the forecast temperature exceeded 26.7 °C, although one DR event was dispatched on a 25.6 °C day because the team thought that there would not be any warmer days during the period.
For day-ahead tests, participants received notifications at 3 p.m. on the previous day; for day-of events, participants received notifications 30 min prior to the event start time. There were a total of four test events for each season: three day-ahead tests and one day-of test. During the winter tests, the test days for each site did not coincide because each site was tested as soon as it was enabled so that the team could capture the coldest mornings. During the summer tests, sites were enabled around the same time, so more sites participated in each test event.
Each site was provided a prioritized list of potential DR strategies for the DR events. The price signals could be set to “normal,” “moderate,” or “high,” and the DR strategies could be tested as the price signals were changed. Commissioning this system entailed changing the price signals and observing the EMCS response. As soon as communication was established, the DRAS operator was notified so that the communications could be verified from the DRAS operator screen. Each site had a “mysite” automation page that allowed facility managers to monitor the status of signals coming into their control systems.
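The mapping from price-signal level to site response can be illustrated with a minimal sketch. The function and strategy names below are hypothetical, not part of the OpenADR specification; the actual sites implemented this logic in EMCS programming or a CLIR box.

```python
# Sketch of how a site-side client might map the three price-proxy
# levels to site-specific shed strategies (names are illustrative only).
SHED_STRATEGIES = {
    "normal": None,                          # normal operation, no action
    "moderate": "raise_setpoints_1C",        # example moderate-price strategy
    "high": "raise_setpoints_2C_dim_lights", # example high-price strategy
}

def respond_to_signal(price_level):
    """Return the shed strategy triggered by the published price level."""
    strategy = SHED_STRATEGIES.get(price_level)
    if strategy is None:
        return "no shed"
    return strategy
```

During commissioning, this kind of mapping is exercised by stepping the price signal through each level and observing the EMCS response.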
DR events and baselines
We called a total of 16 summer and winter DR events. The operator monitored the DRAS and the status of all clients approximately every half hour to verify that there was no loss of communication between the DRAS and its clients. This routine checking, along with automated notifications when clients went offline, meant that communication problems between the DRAS and clients, or other issues related to client software or hardware, were identified well in advance of DR events. If a client went offline, the customer was notified immediately to resolve the problem. We collected historical and event day electric load data for each site, as well as outside air temperature data from the National Oceanic and Atmospheric Administration. These data were used to develop the three baseline models described below:
- Outside air temperature regression (OATR) model,
- The “three-in-ten” baseline model, and
- The “average of similar days” baseline model.
The OATR baseline model is the most accurate and least biased of the three for weather-sensitive buildings. However, collecting weather data from a site, or from a location close to a site, is cumbersome, and many electric utilities are unwilling to create weather-normalized baselines. Therefore, the three-in-ten baseline model, which uses the average hourly load shape of the three highest energy-consuming days during the ten workdays preceding the DR event of interest, is the baseline model preferred by utilities in California. Developing the three-in-ten baseline does not involve collecting weather data, which simplifies the development process. The demand savings estimates for most of the buildings that participated in the study are based on the OATR baseline model. The exception is one building that did not have historical electricity use data; for that building, the average-of-similar-days model was used for the first events, based on as many non-DR days as were available. If a model predicts a lower baseline than the actual demand for any given 15-min or hourly period, this indicates negative demand savings. Negative demand savings are often found after a DR period as part of a “rebound” or recovery peak, in which the HVAC system works to bring the thermal zones back to normal conditions.
The evaluations quantify the demand savings in kilowatts at each site, the whole-building power reduction as a percentage, and the demand-savings intensity (in watts per square meter). The demand savings are calculated by subtracting the actual whole-building power from the baseline demand. The demand savings percentage is the savings expressed as a percentage of whole-building power. The demand-savings intensity is the demand reduction (in watts) normalized by the building’s floor area (in square meters).
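The three metrics can be sketched in a few lines; the function name and argument units below are assumptions for illustration, not from the study.

```python
def demand_savings(baseline_kw, actual_kw, floor_area_m2):
    """Compute the three savings metrics for one interval:
    kW saved, percentage of whole-building power, and W/m^2 intensity."""
    savings_kw = baseline_kw - actual_kw                   # kW saved
    savings_pct = 100.0 * savings_kw / baseline_kw         # % of baseline power
    intensity_w_m2 = savings_kw * 1000.0 / floor_area_m2   # W per m^2 of floor area
    return savings_kw, savings_pct, intensity_w_m2
```

For example, a building with a 500 kW baseline, 450 kW actual demand, and 10,000 m² of floor area would show 50 kW, 10 %, and 5 W/m² of savings.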
The model used to calculate the summer afternoon demand reductions is the OATR with a scalar adjustment for the morning load (noted as OATR-Morning Adjusted, or OATR-MA, in the graphics below). This methodology was used for the summer tests. For the winter tests, however, because the Seattle DR events took place in the morning, the morning adjustment component was replaced with an afternoon adjustment component, since the afternoon periods capture and represent internal loads.
Outside air temperature regression model baseline
The electric consumption for the DR event period was subtracted from the baseline-modelled demand to derive an estimate of demand savings for each 15-min period. Previous research recommends a weather-sensitive baseline model with adjustments for morning load variations for accuracy (Goldberg and Agnew 2003). For the OATR baseline, a whole-building power baseline was estimated first using a regression model that assumes that whole-building power is linearly correlated with outside air temperature. The model is computed as shown in Eq. 1:
$$ {L_i}={a_i}+{b_i}{T_i} $$
(1)
where $L_i$ is the predicted 15-min interval electricity demand for time $i$ from the previous non-DR event workdays. Depending on the time interval of the available weather data, $T_i$ is the hourly or 15-min interval outside air temperature at time $i$. The parameters $a_i$ and $b_i$ are generated from a linear regression of the input data for time $i$. Individual regression equations are developed for each 15-min interval, resulting in 96 regressions for the entire day (24 h/day, with four 15-min periods/h; time $i$ runs from 0:00 to 23:45). To develop the baseline electricity loads for determining demand savings, 20 “non-demand response” days were selected. These 20 baseline days were non-weekend, non-holiday, Monday-through-Friday workdays. Input data were 15-min interval whole-building electricity demand and 15-min interval or hourly outside air temperature.
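The 96 per-interval regressions of Eq. 1 can be sketched as follows. The function names and array shapes are assumptions for illustration (20 baseline days by 96 intervals), not code from the study.

```python
import numpy as np

def fit_oatr_baseline(loads, temps):
    """Fit one linear regression per 15-min interval of the day (Eq. 1).

    loads, temps: arrays of shape (n_days, 96) holding whole-building
    demand (kW) and outside air temperature for the baseline workdays.
    Returns (a, b), each of length 96, so that L_i = a[i] + b[i] * T_i.
    """
    n_intervals = loads.shape[1]
    a = np.empty(n_intervals)
    b = np.empty(n_intervals)
    for i in range(n_intervals):
        # Least-squares fit of load on temperature for interval i.
        b[i], a[i] = np.polyfit(temps[:, i], loads[:, i], 1)
    return a, b

def predict_oatr(a, b, temps_event_day):
    """Baseline demand for each interval of the event day."""
    return a + b * np.asarray(temps_event_day)
```

The demand savings for each interval are then the predicted baseline minus the actual metered load.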
Some utilities have used a three-in-ten baseline for DR savings. The three-in-ten baseline is the average hourly load shape of the three highest energy-consuming days during the most recent ten workdays (excluding holidays). The baseline algorithm for this project considers the site electricity consumption from 7 to 10 a.m. in the winter and from noon to 5 p.m. in the summer when selecting the three days of highest consumption prior to a DR event. For commercial buildings, the OATR baseline is a more accurate and less biased baseline than the three-in-ten baseline (Coughlin et al. 2009). Figure 2 is an example showing the Seattle Municipal Tower’s participation in the March 3 DR event in 2009. The chart shows the actual whole-building power, the LBNL OATR baseline, and the 3/10 baseline. The vertical line at each baseline data point is the standard error of the regression estimate. The vertical lines at 7 and 10 a.m. identify the DR event period. On this day, the three-in-ten baseline is higher than the OATR baseline because there were cooler days during the previous 10 days that were used to develop the baseline.
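The three-in-ten selection rule can be sketched as follows; the function name and data layout (ten hourly load profiles) are assumptions for illustration.

```python
def three_in_ten_baseline(prior_days, event_hours):
    """Average hourly load shape of the 3 highest-consuming of the last
    10 workdays, ranked by consumption during the DR event window.

    prior_days: list of 10 length-24 hourly load profiles (kW).
    event_hours: hours used for ranking, e.g. range(7, 10) for the
    winter events or range(12, 17) for the summer events.
    """
    # Rank days by total consumption inside the event window.
    ranked = sorted(prior_days,
                    key=lambda day: sum(day[h] for h in event_hours),
                    reverse=True)
    top3 = ranked[:3]
    # Hour-by-hour average of the three highest-consumption days.
    return [sum(day[h] for day in top3) / 3.0 for h in range(24)]
```

Because only metered load is needed, no weather data enter this calculation, which is why utilities find it simpler to administer.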
An OATR baseline with adjustments (OAT_A) might be more accurate. In an OAT_A baseline, an adjustment factor ($r_a$) is multiplied by each 15-min load. The factor $r_a$ is defined as the ratio of the actual to the predicted load during the 4 h in the afternoon preceding the winter DR event and the 4 h in the morning prior to the summer DR event, as shown in Eq. 2.
$$ r_a = \frac{\sum\limits_{i=1}^{n} L_{a,i}}{\sum\limits_{i=1}^{n} L_{p,i}} $$
(2)
where $r_a$ is the adjustment factor, $L_{a,i}$ is the actual hourly average load on the DR day for the hour starting at $i$, $L_{p,i}$ is the load predicted by the baseline for the hour starting at $i$, and $n$ is the number of hours used for the adjustment ($n = 4$ for this analysis).
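Eq. 2 and the resulting scaled baseline can be sketched in a few lines; the function names are illustrative assumptions.

```python
def adjustment_factor(actual_loads, predicted_loads):
    """Eq. 2: ratio of actual to baseline-predicted load summed over
    the n adjustment hours (n = 4 in this analysis)."""
    return sum(actual_loads) / sum(predicted_loads)

def adjusted_baseline(baseline, r_a):
    """Scale every 15-min baseline value by the adjustment factor r_a."""
    return [r_a * load for load in baseline]
```

If the building runs exactly as the regression predicts during the adjustment window, $r_a = 1$ and the baseline is unchanged; a hotter-than-typical or more heavily occupied day shifts the whole baseline up proportionally.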
Average-of-similar-day baseline
For two sites, interval meters were installed 2 days before the test events, and the average-of-similar-day baseline was used because of the lack of prior data. For these sites, all available nontest-day data were averaged to develop the baseline. As the events progressed, more nontest days became available and were included in the baseline average.
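This growing-window average can be sketched as follows; the function name and profile layout are assumptions for illustration.

```python
def average_of_similar_days(nontest_days):
    """Interval-by-interval average of the nontest-day load profiles
    collected so far; the set grows as more nontest days accumulate."""
    n_days = len(nontest_days)
    n_intervals = len(nontest_days[0])
    return [sum(day[i] for day in nontest_days) / n_days
            for i in range(n_intervals)]
```

Unlike the OATR and three-in-ten baselines, this model makes no attempt to match the event day's weather or recent load level, so it is only a stopgap until enough metered history accumulates.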