Final version of the report

This commit is contained in:
Radu C. Martin 2021-06-25 11:27:25 +02:00
parent c213d3064e
commit 7def536787
14 changed files with 343 additions and 242 deletions


@ -1,9 +1,27 @@
\section{Introduction}

Buildings are a major consumer of energy, with more than 25\% of the total
energy consumed in the EU coming from residential
buildings~\cite{tsemekiditzeiranakiAnalysisEUResidential2019}. Combined with a
steady increase in energy demand and stricter requirements on energy
efficiency~\cite{europeancommission.jointresearchcentre.EnergyConsumptionEnergy2018},
this amplifies the need for more accessible means of regulating the energy
usage of new and existing buildings.

Data-driven methods of building identification and control prove very useful
through their ease of implementation, foregoing the need for more complex
physics-based models. On the flip side, additional attention is required in the
design of these control schemes, as the results can vary greatly from one
implementation to another.

Gaussian Processes have previously been used to model building dynamics, but
they are usually limited by a fixed computational budget. This limits the
approaches that can be taken for the identification and update of said models.
Learning \acrshort{gp} models have also been used in the context of autonomous
racing cars, but there the Sparse \acrshort{gp} model was built on top of a
white-box model and was only responsible for fitting the unmodeled dynamics.

This project aims to further expand the use of black-box \acrlong{gp} Models in
the context of building control, through online learning of building dynamics
at new operating points as more data gets collected.


@ -1,7 +1,7 @@
\section{Previous Research}

With the increase in computational power and availability of data over time,
the accessibility of data-driven methods for System Identification and Control
has also risen significantly.

The idea of using Gaussian Processes as regression models for control of dynamic
systems is not new, and has already been explored a number of times. A general
@ -16,25 +16,29 @@ jainLearningControlUsing2018}, where the buildings are used for their heat
capacity in order to reduce the stress on energy providers during peak load
times.

There are, however, multiple limitations with these approaches.
In~\cite{nghiemDatadrivenDemandResponse2017} the model is only identified once,
ignoring changes in weather or plant parameters, which could lead to different
dynamics. This is addressed in \cite{jainLearningControlUsing2018} by
re-identifying the model every two weeks using new information. Another
limitation is that of the scalability of the \acrshort{gp}s, which become
prohibitively expensive from a computational point of view when too much data
is added.

Outside of the context of building control, Sparse \acrlong{gp}es have been used
in autonomous racing in order to complement the physics-based model by fitting
the unmodeled dynamics of the
system~\cite{kabzanLearningBasedModelPredictive2019}.

The ability to learn the plant's behaviour in new regions is very helpful in
maintaining model performance over time, as its behaviour starts deviating and
the original identified model goes further and further into the extrapolated
regions.

This project will therefore try to combine the use of online learning schemes
with \acrlong{gp}es by implementing \acrlong{svgp}es, which provide means of
employing \acrshort{gp} Models on larger datasets, and re-training the models
every day at midnight to include all the historically available data.

\clearpage


@ -84,7 +84,7 @@ which, for the rest of the section, will be used in the abbreviated form:
"Training" a \acrshort{gp} is the process of finding the kernel parameters that
best explain the data. This is done by maximizing the probability density
function for the observations $y$, also known as the marginal likelihood:
\begin{equation}\label{eq:gp_likelihood}
p(y) = \frac{1}{\sqrt{(2\pi)^{n}\det{\left(K + \sigma_n^2I\right)}}}
@ -109,6 +109,9 @@ marginal likelihood:
\subsection{Prediction}

Given the proper covariance matrices $K$ and $K_*$, predictions on new points
can be made as follows:
\begin{equation}
\begin{aligned}
\mathbf{f_*} = \mathbb{E}\left(f_*|X, \mathbf{y}, X_*\right) &=
@ -117,9 +120,9 @@ marginal likelihood:
\end{aligned}
\end{equation}

The extension of these predictions to a non-zero mean \acrshort{gp} comes
naturally by applying the zero mean \acrshort{gp} to the \textit{difference}
between the observations and the fixed mean function:
\begin{equation}
\bar{\mathbf{f}}_* = \mathbf{m}(X_*) + K_*\left(K +
@ -310,9 +313,9 @@ lower bound of the log probability of observations.
Systems}\label{sec:gp_dynamical_system}

In the context of Dynamical Systems Identification and Control, Gaussian
Processes are used to represent different model structures, ranging from state
space and \acrshort{nfir} structures, to the more complex \acrshort{narx},
\acrshort{noe} and \acrshort{narmax}.

The general form of an \acrfull{narx} model is as follows:


@ -4,7 +4,7 @@ In order to better analyze the different model training and update methods it
was decided to replace the physical \pdome\ building with a computer model.
This allows for faster-than-real-time simulations, as well as perfectly
reproducing the weather conditions and building response for direct comparison
of different control schemes over longer periods of time.

The model is designed using the CARNOT
toolbox~\cite{lohmannEinfuehrungSoftwareMATLAB} for Simulink. It is based on the
@ -29,7 +29,7 @@ the choice of all the necessary model parameters.
\clearpage

Finally, the Simulink model is completed by adding a \textit{Weather Data File}
containing weather measurements for a whole year, and a \textit{Weather
Prediction} block responsible for sending weather predictions to the MPC.\@ The
controller itself is defined in Python and is connected to Simulink via three
@ -56,7 +56,8 @@ skylights are measured to be squares of edge 2.5m.
\begin{figure}[ht]
\centering
\vspace{-10pt}
\includegraphics[width = 0.75\textwidth]{Images/google_maps_polydome_skylights}
\caption{Google Maps Satellite view of the \pdome\ building}
\label{fig:Google_Maps_Skylights}
\end{figure}
@ -70,9 +71,11 @@ as reference, after which the following measurements have been done in
\citetitle{kimballGIMPGNUImage}~\cite{kimballGIMPGNUImage} using the
\textit{Measure Tool}.

The chosen reference object is the \pdome\ \acrshort{hvac} system, the full
description of which is presented in Section~\ref{sec:HVAC_parameters}, and
which has a known height of 2061mm~\cite{aermecRoofTopManuelSelection}.

\clearpage

\begin{figure}[ht]
\centering
@ -91,7 +94,7 @@ Table~\ref{tab:GIMP_measurements}:
\hline
Object & Size [px] & Size [mm] & Size [m]\\
\hline \hline
\acrshort{hvac} height & 70 & 2100 & 2.1 \\
Building height & 230 & 6900 & 6.9 \\
Stem wall & 45 & 1350 & 1.35 \\
Dome height & 185 & 5550 & 5.55 \\
@ -121,7 +124,7 @@ the model:
The \pdome\ building has a structure that is mostly based on a dome shape, with
the difference that the dome portion of the building does not reach the ground,
but stands above it at a height of approximately $1.35$m (cf.
Table~\ref{tab:GIMP_measurements}), with the large side windows extending to the
ground and creating a \textit{stem wall} for the dome to sit on.
@ -177,7 +180,7 @@ therefore be calculated as:
The total volume of the building is then given as:
\begin{equation}
V = V_d + V_s = \frac{1}{6} \pi h (3r^2 + h^2) + l_s^2 \cdot h_s
\end{equation}

Numerically, considering a dome diameter of 28m, a dome height of 5.55m and a stem
@ -187,7 +190,7 @@ wall edge of 25m, we get the approximate volume of the building:
\end{equation}

The value presented in Equation~\ref{eq:numerical_volume} is used directly in
the \textit{room\_node} element of the CARNOT model (cf.
Figure~\ref{fig:CARNOT_polydome}), as well as in the calculation of the Air
Exchange Rate, presented in Section~\ref{sec:Air_Exchange_Rate}.
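As a quick sanity check, this computation can be reproduced numerically in a few
lines of Python; the printed value is only meant to be compared against
Equation~\ref{eq:numerical_volume}:
\begin{verbatim}
import numpy as np

# Dome cap: V_d = (1/6) * pi * h * (3 r^2 + h^2); stem wall: V_s = l_s^2 * h_s
d, h = 28.0, 5.55        # dome diameter and height [m]
l_s, h_s = 25.0, 1.35    # stem wall edge length and height [m]

r = d / 2.0
V_d = np.pi * h * (3.0 * r**2 + h**2) / 6.0
V_s = l_s**2 * h_s
print(V_d + V_s)         # roughly 2.6e3 m^3 with these inputs
\end{verbatim}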
@ -205,15 +208,15 @@ the chairs, tables, etc.\ but due to the restricted access to the building a
simpler approximation has been made.

\textcite{johraNumericalAnalysisImpact2017} present a methodology to model the
furniture in buildings for different materials, as well as an \textit{equivalent
indoor content material} that is meant to approximate the furniture content of
an office building. These values for mass content, surface area, material
density and thermal conductivity have been taken as the basis for the \pdome\
furniture content approximation, with the assumption that, since the \pdome\ is
still mostly empty, it has approximately a quarter of the furniture present in a
fully furnished office.

The full set of furniture is, therefore, approximated in the CARNOT model as a
wall, with the numerical values for mass, surface, thickness and volume
presented below.
@ -222,11 +225,11 @@ presented below.
% 1/4 * 1.8 [m2/m2 of floor space] * 625 m2 surface = 140 m2
% 140 m2 = [7 20] m [height width]
The equivalent material is taken to have a surface of 1.8 $\text{m}^2$ per
$\text{m}^2$ of floor area~\cite{johraNumericalAnalysisImpact2017}. With a floor
area of 625 $\text{m}^2$, and assuming the furnishing of the building is a
quarter that of a fully-furnished office, the \pdome\ furniture equivalent wall
has a surface area of:
\begin{equation}
S_f = \frac{1}{4} \cdot 1.8 \left[\frac{\text{m}^2}{\text{m}^2}\right]
@ -238,7 +241,8 @@ of:
% 1/4 * 40 [kg/m2 of floor space] * 625 m2 surface = 6250 kg
The mass of the furniture equivalent wall is computed using the same
methodology, considering 40 kg of furniture mass per $\text{m}^2$ of floor
surface.
\begin{equation}
M_f = \frac{1}{4} \cdot 40 \cdot 625\ \left[\text{m}^2\right] = 6250\
@ -273,9 +277,9 @@ volume by the surface:
\subsection{Material properties}

In order to better simulate the behaviour of the real \pdome\ building, it is
necessary to approximate the building materials and their properties as closely
as possible. This section goes into the details and arguments for the choice of
parameters for each of the CARNOT nodes' properties.

\subsubsection{Windows}
@ -293,7 +297,7 @@ models~\cite{WhatAreTypical2018}.
The US Energy Department states that the typical U-factor values for modern
window installations are in the range of 0.2 --- 1.2
\(\frac{W}{m^2K}\)~\cite{GuideEnergyEfficientWindows}.

The European flat glass association claims that the maximum allowable U-factor
value for new window installations in the private sector buildings in
@ -356,22 +360,22 @@ Table~\ref{tab:material_properties}:
\subsection{HVAC parameters}\label{sec:HVAC_parameters}

The \pdome\ is equipped with an \textit{AERMEC RTY-04} \acrshort{hvac} system.
According to the manufacturer's manual~\cite{aermecRoofTopManuelSelection}, this
\acrshort{hvac} houses two compressors of power 11.2 kW and 8.4 kW respectively,
an external ventilator of power 1.67 kW, and a reflow ventilator of power 2 kW.
The unit has a typical \acrlong{eer} (\acrshort{eer}, cooling efficiency) of 4.9
--- 5.1 and a \acrlong{cop} (\acrshort{cop}, heating efficiency) of 5.0, for a
maximum cooling capacity of 64.2 kW.

One particularity of this \acrshort{hvac} unit is that during summer, only one
of the two compressors is running. This results in a higher \acrlong{eer} in the
cases where the full cooling capacity is not required.

\subsubsection*{Ventilation}

According to the manufacturer's manual~\cite{aermecRoofTopManuelSelection}, the
\acrshort{hvac} unit's external fan has an air flow rate ranging between 4900
$\text{m}^3/\text{h}$ and 7000 $\text{m}^3/\text{h}$.

\subsubsection*{Air Exchange Rate}\label{sec:Air_Exchange_Rate}
@ -384,7 +388,8 @@ computed by dividing the air flow through the room by the room volume:
\text{Air exchange rate} = \frac{\text{Air flow}}{\text{Total volume}}
\end{equation}

In the case of the \pdome\ and its \acrshort{hvac}, this results in an airflow
range of:
\begin{equation}
\begin{aligned}
@ -402,7 +407,7 @@ would require more precise measurements to estimate.
\subsection{Validating against experimental data}\label{sec:CARNOT_experimental}

In order to confirm the validity of the model, it is necessary to compare the
CARNOT model's behaviour against that of the real \pdome\ building.

Section~\ref{sec:CARNOT_expdata} presents the available experimental data,
@ -421,8 +426,8 @@ The data has been collected in the time span of June to August 2017, and is
divided into seven different experiments, as presented in
Table~\ref{tab:exp_dates}. The available measurements are the \textit{Outside
Temperature}, \textit{Solar Irradiation}, \textit{Electrical power consumption}
of the \acrshort{hvac}, and two measurements of \textit{Inside Temperature} in
different parts of the room.

\begin{table}[ht]
\centering
@ -445,27 +450,29 @@ parts of the room.
\clearpage

As mentioned previously, the external fan of the \acrshort{hvac} is constantly
running. This can be seen in Figure~\ref{fig:Polydome_electricity} as the
electricity consumption of the \acrshort{hvac} has a baseline of 1.67 kW.

\begin{figure}[ht]
\centering
\includegraphics[width = \textwidth]{Plots/Fan_baseline.pdf}
\caption{Electrical Power consumption of the \pdome\ \acrshort{hvac} for Experiment 7}
\label{fig:Polydome_electricity}
\end{figure}

Figure~\ref{fig:Polydome_electricity} also gives an insight into the workings of
the \acrshort{hvac} when it comes to the combination of the two available
compressors. The instruction manual of the
\acrshort{hvac}~\cite{aermecRoofTopManuelSelection} notes that in summer only
one of the compressors is running. This allows for a larger \acrshort{eer} value
and thus better performance. We can see that this is the case for most of the
experiment, where the power consumption caps at around 6~kW. There are, however,
moments during the first part of the experiment where the power momentarily
peaks over the 6~kW limit, and goes as high as around 9~kW. This most probably
happens when the \acrshort{hvac} decides that the difference between the set
point temperature and the actual measured values is too large to compensate with
a single compressor.

Figure~\ref{fig:Polydome_exp7_settemp} presents the values of the set point
temperature and the measured internal temperature.
@ -473,7 +480,7 @@ temperature and the measured internal temperature.
\begin{figure}[ht]
\centering
\includegraphics[width = \textwidth]{Plots/Exp_settemp.pdf}
\caption{Measured vs set point temperature of the \acrshort{hvac} for Experiment 7}
\label{fig:Polydome_exp7_settemp}
\end{figure}
@ -484,8 +491,8 @@ beginning of Experiment 7, as well as the majority of the other experiments, the
set point temperature is the value that gets changed in order to excite the
system, and since the \acrshort{hvac}'s controller is on during identification,
it will oscillate between using one or two compressors. Lastly, it is possible
to notice that the \acrshort{hvac} is not turned on during the night, with the
exception of the external fan, which continues running.

\subsubsection{The CARNOT WDB weather data format}\label{sec:CARNOT_WDB}
@ -499,12 +506,12 @@ pressure, wind speed and direction, etc. A detailed overview of each
measurement necessary for a simulation is given in the CARNOT user
manual~\cite{CARNOTManual}.

In order to compare the CARNOT model's performance to that of the real \pdome\,
it is necessary to simulate the CARNOT model under the same set of conditions as
the ones present during the experimental data collection. In order to do this,
all the missing values that are required by the simulation have to be filled. In
some cases, such as the sun angles, it is possible to compute the exact values,
but in other cases the real data is not available, which means that it has to be
inferred from the available data.

The information on the zenith and azimuth solar angles can be computed exactly
@ -514,7 +521,7 @@ information available, the zenith, azimuth angles, as well as the \acrfull{aoi}
are computed using the Python pvlib
library~\cite{f.holmgrenPvlibPythonPython2018}.

As opposed to the solar angles, which can be computed exactly from the available
information, the Solar Radiation Components (DHI and DNI) have to be estimated
from the available measurements of GHI, zenith angles (Z) and datetime
information. \textcite{erbsEstimationDiffuseRadiation1982} present an empirical
@ -535,33 +542,33 @@ are computed using the Python pvlib.
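A minimal Python sketch of how the solar position and the Erbs decomposition can
be obtained with pvlib is given below; the site coordinates are approximate and
the GHI series is a placeholder standing in for the actual measurements:
\begin{verbatim}
import pandas as pd
import pvlib

LAT, LON = 46.52, 6.57   # approximate coordinates of the building site

# 15-minute grid spanning one experiment day (placeholder time range)
times = pd.date_range("2017-07-01", "2017-07-02", freq="15min",
                      tz="Europe/Zurich")
solpos = pvlib.solarposition.get_solarposition(times, LAT, LON)

# Placeholder GHI measurements aligned with the time stamps
ghi = pd.Series(500.0, index=times)

# Erbs decomposition of GHI into diffuse (DHI) and direct (DNI) components
components = pvlib.irradiance.erbs(ghi, solpos["zenith"], times)
dhi, dni = components["dhi"], components["dni"]
\end{verbatim}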
The values that cannot be either calculated or approximated from the available
data, such as the precipitation, wind direction, incidence angles in place of
vertical and main/secondary surface axis, have been replaced with the default
CARNOT placeholder value of -9999. The relative humidity, cloud index, pressure
and wind speed have been fixed to 50\%, 0.5, 96300 Pa and 0 $\text{m}/\text{s}$,
respectively.
\subsubsection{\pdome\ and CARNOT model comparison}\label{sec:CARNOT_comparison}

With the \acrshort{wdb} data compiled, we can now turn to simulating the CARNOT
model and comparing its behaviour to that of the real \pdome\ building.

Unfortunately, one crucial piece of information is still missing: the amount of
heat that the \acrshort{hvac} either pumps into or takes out of the building at
any point in time. This value could be approximated from the electrical power
consumption and the \acrshort{eer}/\acrshort{cop} values, provided it is known
whether the \acrshort{hvac} is in heating or cooling mode.

Since this information is lacking in the existing experimental datasets, the
heat supplied to or taken out of the system has been inferred from the
electrical energy use, the measured building temperature and the \acrshort{hvac}
temperature set point, with the assumption that the \acrshort{hvac} is in
cooling mode whenever the measurements are higher than the set point
temperature, and in heating mode otherwise. As can already be seen in
Figure~\ref{fig:Polydome_exp7_settemp}, this is a very strong assumption that is
not always correct. It works well when the measurements are very different from
the set point, as is the case in the first part of the experiment, but it breaks
down in the second part of the experiment, where the set point temperature
remains fixed and it is purely the \acrshort{hvac}'s job to regulate the
temperature.
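The inference described above can be summarised in a short sketch. The
efficiency figures are the typical manufacturer values quoted earlier, while the
subtraction of the constantly running fan baseline and the sign convention
(positive for heating, negative for cooling) are assumptions of this sketch
rather than the exact procedure used:
\begin{verbatim}
import numpy as np

EER, COP = 5.0, 5.0   # typical efficiency figures from the manual
P_FAN = 1.67          # [kW] external fan baseline, always running

def electrical_to_thermal(p_el, t_meas, t_set):
    """Rough thermal power estimate [kW] from electrical consumption,
    assuming cooling whenever the measured temperature is above the
    set point and heating otherwise."""
    p_comp = np.maximum(p_el - P_FAN, 0.0)   # compressor share of the power
    cooling = t_meas > t_set
    return np.where(cooling, -EER * p_comp, COP * p_comp)
\end{verbatim}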
\begin{figure}[ht]
\centering
@ -575,23 +582,23 @@ the HVAC's job to regulate the temperature.
The results of the seven simulations are presented in
Figure~\ref{fig:CARNOT_simulation_validation}. Overall, the simulated
temperature has the same behaviour as the real \pdome\ measurements. A more
detailed inspection shows that for most of the experiments, the simulated
temperature is much more volatile than the true measurements. This could be due
to an overestimated value of the Air Exchange Rate, an underestimated amount of
furniture in the building or, more probably, a miscalculation of the
\acrshort{hvac}'s heating/cooling mode. Of note is the large difference in
behaviour for Experiments 5 and 6. In fact, for these experiments, the values
for the electrical power consumption greatly differ in shape from the ones
presented in the other datasets, which could potentially mean erroneous
measurements, or some other underlying problem with the data.

Finally, it is possible to conclude that the CARNOT building behaves comparably
to the real \pdome\, even if it does not reproduce its behaviour perfectly.
These differences could come from multiple factors: missing information that had
to be inferred and/or approximated, such as the Air Exchange Rate, the heat
provided/extracted, the amount of furniture in the building, the overall shape
and size of the building, as well as possible errors in the experimental data
used for validation. A more detailed analysis of the building parameters would
have to be done in order to find and eliminate the causes of these
discrepancies.

\clearpage


@ -16,12 +16,12 @@ model hyperparameters: the number of regressors, the number of autoregressive
lags for each class of inputs, and the shape of the covariance function have to
be taken into account when designing a \acrshort{gp} model. These choices have a
direct influence on the resulting model behaviour and on where it can be
generalized, as well as an indirect influence in the form of more time-consuming
computations in the case of a larger number of regressors and more complex
kernel functions.

As described in Section~\ref{sec:gp_dynamical_system}, for the purpose of this
project, the \acrlong{gp} model will be trained using the \acrshort{narx}
structure. This already presents an important choice in the selection of
regressors and their respective autoregressive lags.
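To make the regressor structure concrete, the sketch below assembles a
\acrshort{narx} dataset from the output, controlled input and exogenous input
time series for a given choice of lags; the function name, lag values and array
shapes are hypothetical:
\begin{verbatim}
import numpy as np

def build_narx_dataset(y, u, w, l_y, l_u, l_w):
    """Stack lagged outputs y, inputs u and exogenous inputs w into
    NARX regressors X and one-step-ahead targets T."""
    l_max = max(l_y, l_u, l_w)
    X, T = [], []
    for t in range(l_max, len(y)):
        row = np.concatenate([
            y[t - l_y:t][::-1],          # y_{t-1}, ..., y_{t-l_y}
            u[t - l_u:t][::-1],          # u_{t-1}, ..., u_{t-l_u}
            w[t - l_w:t][::-1].ravel(),  # w_{t-1}, ..., w_{t-l_w}
        ])
        X.append(row)
        T.append(y[t])
    return np.array(X), np.array(T).reshape(-1, 1)

# Hypothetical example: 480 samples, 2 exogenous signals, lags 3/1/1
y, u, w = np.random.rand(480), np.random.rand(480), np.random.rand(480, 2)
X, T = build_narx_dataset(y, u, w, l_y=3, l_u=1, l_w=1)
\end{verbatim}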
@ -31,14 +31,14 @@ defined in Section~\ref{sec:mpc_problem}, where the goal is tracking as close as
possible the inside temperature of the building.

The input of the \acrshort{gp} model coincides with the input of the CARNOT
building, namely the \textit{heat} passed to the idealized \acrshort{hvac},
which is held constant during each step.

As for the exogenous inputs, the choice turned out to be more complex. The
CARNOT \acrshort{wdb} format (cf. Section~\ref{sec:CARNOT_WDB}) consists of
information on all the solar angles, the different components of solar
radiation, wind speed and direction, temperature, precipitation, etc. All of
this information is required for CARNOT's proper functioning.
Including all of this information in the \acrshort{gp}'s exogenous inputs would
come with a few downsides. First, depending on the number of lags chosen for the
@ -57,10 +57,10 @@ measurement of the outside temperature. This would also be a limitation when
getting the weather predictions for the next steps during real-world
experiments.

Last, while very verbose information, such as the solar angles and the
components of the solar radiation, is very useful for CARNOT, which simulates
each node individually knowing their absolute positions, this information would
not always benefit the \acrshort{gp} model, at least not comparably to the
additional computational complexity.

For the exogenous inputs, the choice has therefore been made to take the
@ -70,7 +70,7 @@ For the exogenous inputs the choice has therefore been made to take the
The covariance function is an important choice when creating the \acrshort{gp}.
A properly chosen kernel can impose a desired prior behaviour on the
\acrshort{gp}, such as continuity of the function and its derivatives,
periodicity, linearity, etc. On the flip side, choosing the wrong kernel can
make computations more expensive, require more data to learn the proper
behaviour, or outright be numerically unstable and/or give erroneous
predictions.
@ -87,7 +87,7 @@ Kernel~\cite{pleweSupervisoryModelPredictive2020}, a combination of
Kernel~\cite{jainLearningControlUsing2018}, Squared Exponential Kernel and
Kernels from the Mat\'ern family~\cite{massagrayThermalBuildingModelling2016}.

For the purpose of this project, the choice has been made to use the
\textit{\acrlong{se} Kernel}, as it provides a very good balance of versatility
and computational complexity for the modelling of the CARNOT building.
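For reference, the correlation that the \acrshort{se} kernel assigns to two
inputs a given number of lengthscales apart can be evaluated directly; the
following sketch reproduces the kind of values discussed next:
\begin{verbatim}
import numpy as np

def se_correlation(d, lengthscale=1.0):
    """Correlation of the Squared Exponential kernel for inputs a
    distance d apart: exp(-d^2 / (2 l^2))."""
    return np.exp(-0.5 * (d / lengthscale) ** 2)

for k in (1, 2, 3):
    print(k, se_correlation(k))   # ~0.61, ~0.14, ~0.01
\end{verbatim}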
@ -117,7 +117,7 @@ three lengthscales apart.
From Table~\ref{tab:se_correlation} it can be seen that at 3 lengthscales apart,
the inputs are already almost uncorrelated. In order to better visualize this
difference, the value of \textit{relative lengthscale importance} is introduced:
\begin{equation}
\lambda = \frac{1}{\sqrt{l}}
@ -171,17 +171,19 @@ the past inputs, with the exception of the models with very high variance, where
the relative importances stay almost constant across all the inputs. For the
exogenous inputs, the outside temperature ($w2$) is generally more important
than the solar irradiation ($w1$). In the case of more autoregressive lags for
the exogenous inputs, the more recent information is usually more important in
the case of the solar irradiation, while the second-to-last measurement is
preferred for the outside temperature.

For the classical \acrshort{gp} model, the appropriate choice of lags would be
$l_u = 1$ and $l_y = 3$, with $l_w$ taking a value of either 1, 2 or 3,
depending on the results of further analysis.

As for the case of the \acrlong{svgp}, the results for the classical
\acrshort{gp} (cf. Table~\ref{tab:GP_hyperparameters}) are not necessarily
representative of the relationships between the regressors of the
\acrshort{svgp} model, due to the fact that the dataset used for training is
composed of the \textit{inducing variables}, which are not the real data, but a
set of parameters chosen by the training algorithm so as to best generate the
original data.
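A minimal GPflow sketch of such a sparse variational model is given below; the
dataset is a random placeholder, and the choice of 100 inducing inputs
initialised from a random subset of the data is only an assumption of the
sketch:
\begin{verbatim}
import numpy as np
import tensorflow as tf
import gpflow

# Placeholder dataset standing in for the collected building data
X = np.random.rand(2000, 6)
Y = np.sin(3.0 * X[:, :1]) + 0.05 * np.random.randn(2000, 1)

Z = X[np.random.choice(len(X), 100, replace=False)].copy()  # inducing inputs
model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(lengthscales=np.ones(X.shape[1])),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=Z,
    num_data=len(X),
)

opt = tf.optimizers.Adam(learning_rate=0.01)
loss = model.training_loss_closure((X, Y))   # ELBO-based training loss
for _ in range(2000):
    opt.minimize(loss, model.trainable_variables)

mean, var = model.predict_f(X[:5])           # predictive mean and variance
\end{verbatim}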
@ -249,7 +251,7 @@ suggests, it computes the root of the mean squared error:
\end{equation}

This performance metric is very useful when training a model whose goal is
solely to minimize the difference between the measured values and the ones
predicted by the model.

A variant of the \acrshort{mse} is the \acrfull{smse}, which normalizes the
@ -263,8 +265,8 @@ A variant of the \acrshort{mse} is the \acrfull{smse}, which normalizes the
While the \acrshort{rmse} and the \acrshort{smse} are very good at ensuring the
predicted mean value of the Gaussian Process is close to the measured values of
the validation dataset, the confidence of the Gaussian Process prediction is
completely ignored. In this case, two models predicting the same mean values but
having very different confidence intervals would be equivalent according to
these performance metrics.
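For clarity, these metrics can be written out in a few lines; the \acrshort{lpd}
below uses one common convention (the average log density of the observations
under the predicted Gaussians), which may differ in sign from other definitions:
\begin{verbatim}
import numpy as np

def rmse(y, mu):
    return np.sqrt(np.mean((y - mu) ** 2))

def smse(y, mu):
    # MSE normalised by the variance of the validation targets
    return np.mean((y - mu) ** 2) / np.var(y)

def lpd(y, mu, var):
    # Average log density of y under the Gaussian predictions N(mu, var)
    return np.mean(-0.5 * np.log(2.0 * np.pi * var)
                   - 0.5 * (y - mu) ** 2 / var)
\end{verbatim}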
The \acrfull{lpd} is a performance metric which takes into account not only the
@ -336,15 +338,15 @@ overconfident, either due to the very large kernel variance parameter, or the
specific lengthscale combinations. The model with the best
\acrshort{rmse}/\acrshort{smse} metrics, \model{1}{2}{3}, had very bad
\acrshort{msll} and \acrshort{lpd} metrics, as well as by far the largest
variance of all the combinations. On the contrary, the \model{3}{1}{3} model has
the best \acrshort{msll} and \acrshort{lpd} performance, while still maintaining
small \acrshort{rmse} and \acrshort{smse} values. The inconvenience of this set
of lags is the large number of regressors, which leads to much more expensive
computations. Other good choices for the combinations of lags are
\model{2}{1}{3} and \model{1}{1}{3}, which have good performance on all four
metrics, as well as being cheaper from a computational perspective. In order to
make a more informed choice for the best hyperparameters, the simulation
performance of all three combinations has been analysed.
\clearpage
@ -376,7 +378,7 @@ The results for the \acrshort{svgp} model, presented in
Table~\ref{tab:SVGP_loss_functions} are much less ambiguous. The \model{1}{2}{3}
model has the best performance according to all four metrics, with most of the
other combinations scoring much worse on the \acrshort{msll} and \acrshort{lpd}
loss functions. This has, therefore, been chosen as the model for the full year
simulations.
@ -453,16 +455,21 @@ Appendix~\ref{apx:hyperparams_gp}, Figure~\ref{fig:GP_313_test_validation},
where the model has much worse performance on the testing dataset predictions
than the other two models.

The performance of the three models in simulation mode is consistent with the
previously found results. It is of note that neither the model that scored the
best on the \acrshort{rmse}/\acrshort{smse}, \model{1}{2}{3}, nor the one that
had the best \acrshort{msll}/\acrshort{lpd}, performs the best under a
simulation scenario. In the case of the former, this is due to numerical
instability, with the training/prediction often failing depending on the inputs.
On the other hand, in the case of the latter, only focusing on the
\acrshort{msll}/\acrshort{lpd} performance metrics can lead to very conservative
models that give good and confident one-step-ahead predictions, while still
being unable to fit the true behaviour of the plant.

Overall, the \model{2}{1}{3} model performed the best in the simulation
scenario, while still having good performance on all loss functions. In
implementation, however, this model turned out to be very unstable, and the more
conservative \model{1}{1}{3} model was used instead.
\clearpage


@ -25,10 +25,11 @@ The optimization problem is therefore defined as follows:
\end{align}
\end{subequations}
where $y_{ref, t}$ is the reference temperature at time $t$, $\mathcal{U}$ is
the set of allowed inputs, $\mathbf{x}_{t}$ is the GP input vector at time $t$,
composed of the exogenous autoregressive inputs $\mathbf{w}_{t}$, the
autoregressive controlled inputs $\mathbf{u}_{t}$ and the autoregressive outputs
$\mathbf{y}_{t}$.
\subsection{Temperature reference}\label{sec:reference_temperature}


@ -42,8 +42,8 @@ communicate over TCP, these elements can all be implemented completely
separately, which is much more similar to a real-life implementation.

With this structure, the only information received and sent by the Python
controller is the actual sampled data, without any additional information.
While the controller needs information on the control horizon in order to read
the correct amount of data for the weather predictions and to properly generate
the optimization problem, the discrete/continuous transition and vice-versa
happens on the Simulink side. This simplifies the adjustment of the sampling
@ -54,10 +54,10 @@ The weather prediction is done using the information present in the CARNOT
\acrshort{wdb} object. Since the sampling time and control horizon of the
controller can be adjusted, the required weather predictions can lie within an
arbitrary time interval. At each sampling point, the weather measurements are
piece-wise linearly interpolated for the span of time ranging from the most
recent measurement to the next measurement after the last required prediction
time. This provides a better approximation than pure linear interpolation over
the starting and ending points, while retaining a simple implementation.
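A sketch of this interpolation step, using hypothetical measurement and
prediction time stamps, is given below:
\begin{verbatim}
import numpy as np

def predict_weather(t_meas, w_meas, t_pred):
    """Piece-wise linear interpolation of a weather signal onto the
    prediction-horizon time stamps (all times in seconds)."""
    return np.interp(t_pred, t_meas, w_meas)

# Hourly outside-temperature measurements, 15-minute grid over two hours
t_meas = np.array([0.0, 3600.0, 7200.0, 10800.0])
temp = np.array([14.0, 16.5, 18.0, 17.2])
t_pred = np.arange(0.0, 7200.0 + 1.0, 900.0)
print(predict_weather(t_meas, temp, t_pred))
\end{verbatim}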
\subsection{Gaussian Processes}
@ -67,11 +67,11 @@ This means that naive implementations can get too expensive in terms of
computation time very quickly.

In order to keep this bottleneck as small as possible when dealing with
\acrshort{gp}s, a very fast implementation of \acrlong{gp} Models was used, in
the form of GPflow~\cite{matthewsGPflowGaussianProcess2017}. It is based on
TensorFlow~\cite{tensorflow2015-whitepaper}, which has very efficient
implementations of all the necessary Linear Algebra operations. Another benefit
of this implementation is that it can very easily make use of additional
computational resources, such as a GPU or TPU.
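As an illustration (and not the exact code used in this project), training a
classical \acrshort{gp} regression model with a \acrlong{se} kernel in GPflow
reduces to a few lines; the dataset below is a random placeholder with the same
number of points as the identification data:
\begin{verbatim}
import numpy as np
import gpflow

# Placeholder identification dataset (480 points, 6 regressors assumed)
X = np.random.rand(480, 6)
Y = np.sin(3.0 * X[:, :1]) + 0.05 * np.random.randn(480, 1)

model = gpflow.models.GPR(
    data=(X, Y),
    kernel=gpflow.kernels.SquaredExponential(lengthscales=np.ones(X.shape[1])),
)
gpflow.optimizers.Scipy().minimize(model.training_loss,
                                   model.trainable_variables)

mean, var = model.predict_f(X[:5])   # posterior mean and variance
\end{verbatim}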
\subsubsection{Classical Gaussian Process training}
@ -129,8 +129,8 @@ number of function calls by around an order of magnitude, which already
drastically reduces computation time.

Another significant speed improvement comes from transforming the Python calls
to TensorFlow into native tf-functions. This change incurs a small overhead the
first time the optimization problem is run, since all the TensorFlow functions
have to be compiled before execution, but afterwards speeds up the execution by
around another order of magnitude.
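As an illustration of this mechanism, the prediction call of a GPflow model
(such as the one from the previous sketch) can be wrapped into a compiled
tf-function as follows; the input dimension is a placeholder:
\begin{verbatim}
import numpy as np
import tensorflow as tf

# 'model' is a trained GPflow model; the wrapped call is traced and compiled
# the first time it is executed, and reused afterwards
compiled_predict = tf.function(
    lambda x: model.predict_f(x),
    input_signature=[tf.TensorSpec(shape=[None, 6], dtype=tf.float64)],
)

x_test = tf.constant(np.random.rand(1, 6))
mean, var = compiled_predict(x_test)   # first call: tracing + compilation
mean, var = compiled_predict(x_test)   # later calls reuse the compiled graph
\end{verbatim}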
@ -148,7 +148,7 @@ Equation~\ref{eq:optimal_control_problem} becomes very nonlinear quite fast. In
fact, due to the autoregressive structure of the \acrshort{gp}, the predicted
temperature at time $t$ is passed as an input to the model at time $t+1$. A
simple recursive implementation of the Optimization Problem becomes intractable
after only 3~---~4 prediction steps.

In order to solve this problem, a new OCP is introduced. It has a much sparser
structure, in exchange for a larger number of variables. This turns out to be
@ -183,20 +183,19 @@ the intermediate results for analysis.
In the beginning of the experiment there is no information available on the In the beginning of the experiment there is no information available on the
building's thermal behaviour. For this part of the simulation, the controller building's thermal behaviour. For this part of the simulation, the controller
switches to a \acrshort{pi} controller until it gathers enough data to train a switches to a \acrshort{pi} controller with random disturbances until it gathers
\acrshort{gp} model. The signal is then disturbed by a random signal before enough data to train a \acrshort{gp} model. This ensures that the building is
being applied to the CARNOT building. This ensured that the building is sufficiently excited to capture its dynamics, while maintaining the temperature
sufficiently excited to capture its dynamics, while maintaining the within an acceptable range (approximately 15 --- 25 $\degree$C).
temperature within an acceptable range (~15 --- 25 $\degree$C).
Once enough data has been captured, the Python controller trains the Once enough data has been captured, all the features are first scaled to the
\acrshort{gp} model and switches to tracking the appropriate SIA 180:2014 range [-1, 1] in order to reduce the possibility of numerical instabilities. The
reference temperature (cf. Section~\ref{sec:reference_temperature}). Python controller then trains the \acrshort{gp} model and switches to tracking
the appropriate SIA 180:2014 reference temperature (cf.
Section~\ref{sec:reference_temperature}).
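A minimal sketch of this scaling step is given below, assuming a simple per-feature min-max map; the function names are illustrative and the scaler actually used in the project may differ.

\begin{verbatim}
# Sketch: per-feature min-max scaling to [-1, 1] (names are illustrative)
import numpy as np

def fit_scaler(X):
    return X.min(axis=0), X.max(axis=0)

def scale(X, x_min, x_max):
    return 2.0 * (X - x_min) / (x_max - x_min) - 1.0

def unscale(X_scaled, x_min, x_max):
    return (X_scaled + 1.0) / 2.0 * (x_max - x_min) + x_min
\end{verbatim}

The same scaling parameters then have to be reused to map the model's predictions back to physical units before they are applied or compared against the reference.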
For the case of the \acrshort{svgp}, a new model is trained once enough data is For the case of the \acrshort{svgp}, a new model is trained once enough data is
gathered. The implementations tested were updated once a day, either on the gathered. The implementations tested were updated once a day, either on the
whole historical set of data, or on a window of the last five days of data. whole historical set of data, or on a window of the last five days of data.
% TODO [Implementation] Add info on scaling
\clearpage \clearpage

View file

@ -1,10 +1,10 @@
\section{Results}\label{sec:results} \section{Results}\label{sec:results}
% TODO [Results] Add info on control horizon
This section focuses on the presentation and interpretation of the year-long This section focuses on the presentation and interpretation of the year-long
simulation of the control schemes present previously. simulation of the control schemes presented previously. All the simulations
analysed in this Section were run with a sampling time of 15 minutes and a
control horizon of 8 steps.
Section~\ref{sec:GP_results} analyses the results of a conventional Section~\ref{sec:GP_results} analyses the results of a conventional
\acrlong{gp} Model trained on the first five days of gathered data. The models \acrlong{gp} Model trained on the first five days of gathered data. The models
@ -25,7 +25,7 @@ is then employed for the rest of the year.
With a sampling time of 15 minutes, the model is trained on 480 points of data. With a sampling time of 15 minutes, the model is trained on 480 points of data.
This size of the identification dataset is enough to learn the behaviour of the This size of the identification dataset is enough to learn the behaviour of the
plant, without being too complex to solve from a numerical perspective, the plant, without being too complex to solve from a numerical perspective. The
current implementation takes roughly 1.5 seconds of computation time per step. current implementation takes roughly 1.5 seconds of computation time per step.
For reference, identifying a model on 15 days' worth of experimental data (1440 For reference, identifying a model on 15 days' worth of experimental data (1440
points) makes simulation time approximately 11 --- 14 seconds per step, or points) makes simulation time approximately 11 --- 14 seconds per step, or
@ -39,8 +39,8 @@ $\degree$C in the stable part of the simulation. The offset becomes much larger
once the reference temperature starts moving from the initial constant value. once the reference temperature starts moving from the initial constant value.
The controller becomes completely unstable around the middle of July, and can The controller becomes completely unstable around the middle of July, and can
only regain some stability at the middle of October. It is also possible to note only regain some stability at the middle of October. It is also possible to note
that from mid-October --- end-December the controller has very similar that from mid-October to end-December the controller has very similar
performance to that exhibited in the beginning of the year, namely January --- performance to that exhibited in the beginning of the year, namely January to
end-February. end-February.
\begin{figure}[ht] \begin{figure}[ht]
@ -52,10 +52,10 @@ end-February.
\end{figure} \end{figure}
This very large difference in performance could be explained by the change in This very large difference in performance could be explained by the change in
weather during the year. The winter months of the beginning of the year and end weather during the year. The winter months of the beginning and end of the year
of year exhibit similar performance, the spring months already make the exhibit similar performance. The spring months already make the controller less
controller less stable than at the start of the year, while the drastic stable than at the start of the year, while the drastic temperature changes in
temperature changes in the summer make the controller completely unstable. the summer make the controller completely unstable.
\clearpage \clearpage
@ -76,14 +76,14 @@ occurring during the winter months.
Figure~\ref{fig:GP_first_model_performance} analyses the 20-step ahead Figure~\ref{fig:GP_first_model_performance} analyses the 20-step ahead
simulation performance of the identified model over the course of the year. At simulation performance of the identified model over the course of the year. At
experimental step 250 the controller is still gathering data. It is therefore experimental step 250, the controller is still gathering data. It is therefore
expected that the identified model will be capable of reproducing this data. At expected that the identified model will be capable of reproducing this data. At
step 500, 20 steps after identification, the model correctly steers the internal step 500, 20 steps after identification, the model correctly steers the internal
temperature towards the reference temperature. On the flip side, already at temperature towards the reference temperature. On the flip side, already at
experimental steps 750 and 1000, only 9 days into the simulation, the model is experimental steps 750 and 1000, only 9 days into the simulation, the model is
unable to properly simulate the behaviour of the plant, with the maximum unable to properly simulate the behaviour of the plant, with the maximum
difference at the end of the simulation reaching 0.75 and 1.5 $\degree$C difference at the end of the simulation reaching 0.75 $\degree$C and 1.5
respectively. $\degree$C respectively.
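For clarity, a hedged sketch of the k-step-ahead simulation used in this comparison is given below; the lag layout and variable names are assumptions for illustration and do not reproduce the project's actual code.

\begin{verbatim}
# Sketch: roll the autoregressive GP forward, feeding its own predicted
# mean back in place of the measured output lags.
import numpy as np

def simulate_ahead(model, y_lags, u_future, w_future, steps=20):
    y_lags = list(y_lags)                 # most recent measurement first
    predictions = []
    for k in range(steps):
        x = np.hstack([y_lags, u_future[k], w_future[k]]).reshape(1, -1)
        mean, _ = model.predict_f(x)      # GP posterior mean at this input
        y_next = float(np.squeeze(mean.numpy()))
        predictions.append(y_next)
        y_lags = [y_next] + y_lags[:-1]   # shift the output lags
    return np.array(predictions)
\end{verbatim}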
\begin{figure}[ht] \begin{figure}[ht]
\centering \centering
@ -144,9 +144,12 @@ The results of this setup are presented in
Figure~\ref{fig:SVGP_fullyear_simulation}. It can already be seen that this Figure~\ref{fig:SVGP_fullyear_simulation}. It can already be seen that this
setup performs much better than the initial one. The only large deviations from setup performs much better than the initial one. The only large deviations from
the reference temperature are due to cold --- when the \acrshort{hvac}'s limited the reference temperature are due to cold --- when the \acrshort{hvac}'s limited
heat capacity is unable to maintain the proper temperature. heating capacity is unable to maintain the proper temperature. Additionally, the
\acrshort{svgp} controller takes around 250 --- 300 ms of computation time for each
simulation step, reducing the computational cost relative to the original \acrshort{gp}
by a factor of six.
% TODO: [Results] Add info on SVGP vs GP computation speed
\begin{figure}[ht] \begin{figure}[ht]
\centering \centering
@ -194,7 +197,7 @@ starting at 107500 and 11000 points.
\centering \centering
\includegraphics[width = \includegraphics[width =
\textwidth]{Plots/1_SVGP_480pts_inf_window_12_averageYear_first_model_performance.pdf} \textwidth]{Plots/1_SVGP_480pts_inf_window_12_averageYear_first_model_performance.pdf}
\caption{GP first model performance} \caption{SVGP first model performance}
\label{fig:SVGP_first_model_performance} \label{fig:SVGP_first_model_performance}
\end{figure} \end{figure}
@ -214,7 +217,7 @@ than the classical \acrshort{gp} model to capture the building dynamics.
\centering \centering
\includegraphics[width = \includegraphics[width =
\textwidth]{Plots/1_SVGP_480pts_inf_window_12_averageYear_later_model_performance.pdf} \textwidth]{Plots/1_SVGP_480pts_inf_window_12_averageYear_later_model_performance.pdf}
\caption{GP later model performance} \caption{SVGP later model performance}
\label{fig:SVGP_later_model_performance} \label{fig:SVGP_later_model_performance}
\end{figure} \end{figure}
@ -229,7 +232,7 @@ respectively.
\centering \centering
\includegraphics[width = \includegraphics[width =
\textwidth]{Plots/1_SVGP_480pts_inf_window_12_averageYear_last_model_performance.pdf} \textwidth]{Plots/1_SVGP_480pts_inf_window_12_averageYear_last_model_performance.pdf}
\caption{GP last model performance} \caption{SVGP last model performance}
\label{fig:SVGP_last_model_performance} \label{fig:SVGP_last_model_performance}
\end{figure} \end{figure}
@ -255,12 +258,12 @@ control scheme:
closed-loop operation, will the performance deteriorate drastically if closed-loop operation, will the performance deteriorate drastically if
the first model is trained on less data? the first model is trained on less data?
\item How much information can the model extract from closed-loop operation? \item How much information can the model extract from closed-loop operation?
Would a model trained on only the last five days of closed-loop Would a model trained on a window of only the last five days of
operation data be able to perform correctly? closed-loop operation data be able to perform correctly?
\end{itemize} \end{itemize}
These questions will be further analysed in the Sections~\ref{sec:svgp_window} These questions will be further analysed in Sections~\ref{sec:svgp_96pts}
and~\ref{sec:svgp_96pts} respectively. and~\ref{sec:svgp_window} respectively.
\clearpage \clearpage
@ -299,8 +302,8 @@ months. This might be due to the fact that during the colder months, the
additional heat to the system. additional heat to the system.
A similar trend can be observed for the evolution of the input's A similar trend can be observed for the evolution of the input's
hyperparameters, with the exception that the first lag of the controlled input, hyperparameters, with the exception that the first lag of the controlled input
$u_{1,1}$ remains the most important over the course of the year. ($u_{1,1}$) remains the most important over the course of the year.
For the lags of the measured output it can be seen that, over the course of the For the lags of the measured output it can be seen that, over the course of the
year, the importance of the first lag decreases, while that of the second and year, the importance of the first lag decreases, while that of the second and
@ -342,7 +345,7 @@ refinements being done as data is added to the system.
One question that could be addressed given these mostly linear kernel terms is One question that could be addressed given these mostly linear kernel terms is
how well would an \acrshort{svgp} model perform with a linear kernel. how well would an \acrshort{svgp} model perform with a linear kernel.
Intuition would hint that it should still be able to track the reference Intuition would hint that it should still be able to track the reference
temperature, albeit not as precisely due to the correlation that diminished much temperature, albeit not as precisely, since the linear kernel lacks the
slower when the two points are closer together in the \acrshort{se} kernel. This \acrshort{se} kernel's property that the correlation between two points diminishes much more slowly when they lie closer together. This
will be further investigated in Section~\ref{sec:svgp_linear}. will be further investigated in Section~\ref{sec:svgp_linear}.
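For reference, the two kernel choices discussed here can be sketched as follows in GPflow; the number of inputs and the initial parameter values are placeholders and not taken from the project configuration.

\begin{verbatim}
# Sketch: ARD Squared-Exponential kernel vs. a purely linear kernel
import numpy as np
import gpflow

n_features = 8   # assumed number of autoregressive inputs

# One lengthscale per input; small lengthscales mark influential inputs
se_kernel = gpflow.kernels.SquaredExponential(
    lengthscales=np.ones(n_features))

# One variance weight per input dimension (ARD form of the linear kernel)
linear_kernel = gpflow.kernels.Linear(variance=np.ones(n_features))
\end{verbatim}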
@ -351,7 +354,7 @@ will be further investigated in Section~\ref{sec:svgp_linear}.
\subsection{SVGP with one day of starting data}\label{sec:svgp_96pts} \subsection{SVGP with one day of starting data}\label{sec:svgp_96pts}
As previously discussed in Section~\ref{sec:SVGP_results}, the \acrshort{svgp} As previously discussed in Section~\ref{sec:SVGP_results}, the \acrshort{svgp}
model is able to properly adapt given new information, over time refining its model is able to properly adapt given new information, over time refining its
understanding of the plant's dynamics. understanding of the plant's dynamics.
Analyzing the results of a simulation done on only one day's worth of initial Analyzing the results of a simulation done on only one day's worth of initial
@ -381,26 +384,26 @@ for cumbersome and potentially costly initial experiments for gathering data.
\subsection{SVGP with a five days moving window}\label{sec:svgp_window} \subsection{SVGP with a five days moving window}\label{sec:svgp_window}
This section presents the result of running a different control scheme. Here, This section presents the results of running a different control scheme. Here, as
as the base \acrshort{svgp} model, it is first trained on 5 days worth of data, was the case for the base \acrshort{svgp} model, the first model is trained on five days'
with the difference being that each new model is only identified using the last worth of data, with the difference being that each new model is only identified
five days' worth of data. This should provide an insight on whether the using the last five days' worth of data. This should provide insight into
\acrshort{svgp} model is able to understand model dynamics only based on whether the \acrshort{svgp} model is able to learn the plant's dynamics only
closed-loop operation. based on closed-loop operation.
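A minimal sketch of the windowing itself is given below; the sample counts follow from the 15-minute sampling time stated earlier, while the variable names are illustrative.

\begin{verbatim}
# Sketch: keep only the last five days of data for each re-identification
SAMPLES_PER_DAY = 96        # 24 h / 15 min
WINDOW = 5 * SAMPLES_PER_DAY

def latest_window(X_hist, Y_hist):
    return X_hist[-WINDOW:], Y_hist[-WINDOW:]
\end{verbatim}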
\begin{figure}[ht] \begin{figure}[ht]
\centering \centering
\includegraphics[width = \includegraphics[width =
\textwidth]{Plots/5_SVGP_480pts_480pts_window_12_averageYear_fullyear.pdf} \textwidth]{Plots/5_SVGP_480pts_480pts_window_12_averageYear_fullyear.pdf}
\caption{SVGP full year simulation} \caption{Windowed SVGP full year simulation}
\label{fig:SVGP_480window_fullyear_simulation} \label{fig:SVGP_480window_fullyear_simulation}
\end{figure} \end{figure}
As can be seen in Figure~\ref{fig:SVGP_480window_fullyear_simulation}, this As can be seen in Figure~\ref{fig:SVGP_480window_fullyear_simulation}, this
model is unable to properly track the reference temperature. In fact, five days model is unable to properly track the reference temperature. In fact, five days
after the identification the model forgets all the initial data and becomes after the identification, the model forgets all the initial data and becomes
unstable. This instability then generates enough excitation of the plant for the unstable. This instability then generates enough excitation of the plant for the
model to again learn its behaviour. This cycle repeats every five days, when the model to learn its behaviour again. This cycle repeats every five days, when the
controller becomes unstable. In the stable regions, however, the controller is controller becomes unstable. In the stable regions, however, the controller is
able to track the reference temperature. able to track the reference temperature.
@ -422,7 +425,7 @@ nuanced details of the CARNOT model dynamics.
\centering \centering
\includegraphics[width = \includegraphics[width =
\textwidth]{Plots/10_SVGP_480pts_inf_window_12_averageYear_LinearKernel_fullyear.pdf} \textwidth]{Plots/10_SVGP_480pts_inf_window_12_averageYear_LinearKernel_fullyear.pdf}
\caption{SVGP full year simulation} \caption{Linear SVGP full year simulation}
\label{fig:SVGP_linear_fullyear_simulation} \label{fig:SVGP_linear_fullyear_simulation}
\end{figure} \end{figure}
@ -430,9 +433,6 @@ nuanced details of the CARNOT model dynamics.
\subsection{Comparative analysis} \subsection{Comparative analysis}
This section will compare all the results presented in the previous Sections and
try to analyze the differences and their origin.
Presented in Table~\ref{tab:Model_comparations} are the Mean Error, Error Presented in Table~\ref{tab:Model_comparations} are the Mean Error, Error
Variance and Mean Absolute Error for the full year simulation for the three Variance and Mean Absolute Error for the full year simulation for the three
stable \acrshort{svgp} models, as well as the classical \acrshort{gp} model. stable \acrshort{svgp} models, as well as the classical \acrshort{gp} model.
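The three metrics are computed on the tracking error over the whole simulated year; a sketch of their definitions, as assumed here, is:

\begin{verbatim}
# Sketch: comparison metrics on the tracking error e = y_measured - y_ref
import numpy as np

def tracking_metrics(y_measured, y_reference):
    e = np.asarray(y_measured) - np.asarray(y_reference)
    return {"mean_error": float(np.mean(e)),
            "error_variance": float(np.var(e)),
            "mean_absolute_error": float(np.mean(np.abs(e)))}
\end{verbatim}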
@ -467,12 +467,12 @@ The two \acrshort{svgp} models with \acrlong{se} kernels perform the best. They
have a comparable performance, with very small differences in Mean Absolute have a comparable performance, with very small differences in Mean Absolute
Error and Error variance. This leads to the conclusion that the \acrshort{svgp} Error and Error variance. This leads to the conclusion that the \acrshort{svgp}
models can be deployed with less explicit identification data, but they will models can be deployed with less explicit identification data, but they will
continue to improve over the course of the year as the building passes through continue to improve over the course of the year, as the building passes through
different regions of the state space and more data is collected. different regions of the state space and more data is collected.
These results do not, however, discredit the use of \acrlong{gp} for use in a These results do not, however, discredit the employment of \acrlong{gp} models
multi-seasonal situation. As shown before, given the same amount of data and in a multi-seasonal situation. As shown before, given the same amount of data
ignoring the computational cost, they perform better than the alternative and ignoring the computational cost, they perform better than the alternative
\acrshort{svgp} models. The poor initial performance could be mitigated by \acrshort{svgp} models. The poor initial performance could be mitigated by
sampling the identification data at different points in time during multiple sampling the identification data at different points in time during multiple
experiments, updating a fixed-size dataset based on the gained information, as experiments, updating a fixed-size dataset based on the gained information, as

View file

@ -5,22 +5,22 @@ simulation for a classical \acrshort{gp} model, as well as a few incarnations of
\acrshort{svgp} models. The results show that the \acrshort{svgp} models have much \acrshort{svgp} models. The results show that the \acrshort{svgp} models have much
better performance, mainly due to the possibility of updating the model better performance, mainly due to the possibility of updating the model
throughout the year. The \acrshort{svgp} models also present a computational throughout the year. The \acrshort{svgp} models also present a computational
cost advantage both in training and in evaluation due to several approximations cost advantage both in training and in evaluation, due to several approximations
shown in Section~\ref{sec:gaussian_processes}. shown in Section~\ref{sec:gaussian_processes}.
Focusing on the \acrlong{gp} models, there could be several ways of improving Focusing on the \acrlong{gp} models, there could be several ways of improving
their performance, as noted previously: a more varied identification dataset and their performance, as noted previously: a more varied identification dataset and
smart update of a fixed-size data dictionary according to information gain could smart update of a fixed-size data dictionary according to information gain
mitigate the present problems. could mitigate the present problems.
Using a Sparse \acrshort{gp} without also replacing the maximum log likelihood Using a Sparse \acrshort{gp} without replacing the log marginal likelihood
with the \acrshort{elbo} could improve the performance of the \acrshort{gp} model at with the \acrshort{elbo} could improve the performance of the \acrshort{gp} model at
the expense of training time. the expense of training time.
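To make the distinction concrete, a hedged sketch of the related GPflow model classes is given below; the data, kernel and inducing locations are placeholders, and whether the collapsed-bound variant corresponds exactly to the suggestion above depends on the training objective one has in mind.

\begin{verbatim}
# Sketch: exact, sparse-variational (ELBO) and collapsed-bound sparse models
import numpy as np
import gpflow

X = np.random.rand(480, 8); Y = np.random.rand(480, 1)   # placeholder data
Z = X[::4].copy()                                         # inducing locations

exact = gpflow.models.GPR((X, Y), kernel=gpflow.kernels.SquaredExponential())

svgp = gpflow.models.SVGP(gpflow.kernels.SquaredExponential(),
                          gpflow.likelihoods.Gaussian(),
                          inducing_variable=Z, num_data=len(X))

sgpr = gpflow.models.SGPR((X, Y),
                          kernel=gpflow.kernels.SquaredExponential(),
                          inducing_variable=Z)
\end{verbatim}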
An additional change that could be made is the inclusion of as much prior An additional change that could be made is the inclusion of as much prior
information as possible, through setting a more refined kernel, as well as adding information as possible, through setting a more refined kernel, as well as adding
prior information on all the model hyperparameters when available. This approach, prior information on all the model hyperparameters when available. This approach,
however, goes against the ``spirit'' of black-box approaches since significant however, goes against the ``spirit'' of black-box approaches, since significant
insight into the physics of the plant is required in order to properly model and insight into the physics of the plant is required in order to properly model and
implement this information. implement this information.
@ -29,14 +29,14 @@ not properly addressed in this work. First, the size of the inducing dataset was
chosen experimentally until it was found to accurately reproduce the manually chosen experimentally until it was found to accurately reproduce the manually
collected experimental data. In order to better use the available computational collected experimental data. In order to better use the available computational
resources, this value could be found programmatically so as to minimize resources, this value could be found programmatically so as to minimize
evaluation time while still providing good performance. Another possibility is evaluation time, while still providing good performance. Another possibility is
the periodic re-evaluation of this value when new data comes in, since as more the periodic re-evaluation of this value when new data comes in, since as more
and more data is collected the model becomes more complex, and in general more and more data is collected the model becomes more complex, and in general more
inducing locations could be necessary to properly reproduce the training data. inducing locations could be necessary to properly reproduce the training data.
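One possible, purely illustrative, way of automating this choice is to grow the inducing set until a held-out error metric stops improving; the candidate sizes, tolerance and the user-supplied training routine in the sketch below are all assumptions.

\begin{verbatim}
# Sketch: grow the number of inducing points until held-out error stalls
import numpy as np

def pick_num_inducing(train_svgp, X_val, Y_val,
                      candidates=(50, 100, 200, 400), tol=1e-3):
    best_err, best_m = np.inf, candidates[0]
    for m in candidates:
        model = train_svgp(m)            # trains an SVGP with m inducing points
        mean, _ = model.predict_f(X_val)
        err = float(np.mean(np.abs(mean.numpy() - Y_val)))
        if best_err - err > tol:
            best_err, best_m = err, m
        else:
            break                        # no significant improvement: stop
    return best_m
\end{verbatim}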
Finally, none of the presented controllers take into account occupancy rates or adapt to Finally, none of the presented controllers take into account occupancy rates or
possible changes in the real building, such as adding or removing furniture, adapt to possible changes in the real building, such as adding or removing
deteriorating insulation and so on. The presented update methods only deals with furniture, deteriorating insulation and so on. The presented update methods only
adding information on behaviour in different state space regions, i.e. deal with adding information on behaviour in different state space regions, i.e.
\textit{learning}, and their ability to \textit{adapt} to changes in the actual \textit{learning}, and their ability to \textit{adapt} to changes in the actual
plant's behaviour should be further addressed. plant's behaviour should be further addressed.

View file

@ -1,4 +1,34 @@
\section{Conclusion} \section{Conclusion}
The aim of this project was to analyse the performance of \acrshort{gp}-based
controllers for use in longer-lasting implementations, where differences in
building behaviour relative to the initially available data become important.
First, the performance of a classical \acrshort{gp} model trained on five days'
worth of experimental data was analysed. This model turned out to be unable to
correctly extrapolate building behaviour as the weather changed throughout the
year.
Several \acrshort{svgp} implementations were then analysed. They turned out to
provide important benefits over the classical models, such as the ability to
easily scale when new data is being added and the much reduced computational
effort required. They do, however, present some downsides, namely increasing the
number of hyperparameters by having to choose the number of inducing locations,
as well as performing worse than the classical \acrshort{gp} implementation
given the same amount of data.
Finally, the possible improvements to the current implementations have been
addressed, noting that classical \acrshort{gp} models could also be
adapted to the \textit{learning control} paradigm, even if their implementation
could turn out to be much more involved and more computationally expensive than
the \acrshort{svgp} alternative.
\section*{Acknowledgements}
I would like to thank Koch Manuel Pascal for the great help provided during the
course of the project, ranging from the basics of CARNOT modelling to helping
me better compare the performance of different controllers, as well as Professor
Jones, whose insights always provided valuable guidance, while still allowing me to
discover everything on my own.
\clearpage \clearpage

View file

@ -7,14 +7,14 @@
\begin{figure}[ht] \begin{figure}[ht]
\centering \centering
\includegraphics[width = \textwidth]{Plots/GP_113_training_performance.pdf} \includegraphics[width = \textwidth]{Plots/GP_113_training_performance.pdf}
\caption{} \caption{Prediction performance of \model{1}{1}{3} on the training dataset}
\label{fig:GP_train_validation} \label{fig:GP_train_validation}
\end{figure} \end{figure}
\begin{figure}[ht] \begin{figure}[ht]
\centering \centering
\includegraphics[width = \textwidth]{Plots/GP_113_test_performance.pdf} \includegraphics[width = \textwidth]{Plots/GP_113_test_performance.pdf}
\caption{} \caption{Prediction performance of \model{1}{1}{3} on the test dataset}
\label{fig:GP_test_validation} \label{fig:GP_test_validation}
\end{figure} \end{figure}
@ -25,14 +25,14 @@
\begin{figure}[ht] \begin{figure}[ht]
\centering \centering
\includegraphics[width = \textwidth]{Plots/GP_213_training_performance.pdf} \includegraphics[width = \textwidth]{Plots/GP_213_training_performance.pdf}
\caption{} \caption{Prediction performance of \model{2}{1}{3} on the training dataset}
\label{fig:GP_213_train_validation} \label{fig:GP_213_train_validation}
\end{figure} \end{figure}
\begin{figure}[ht] \begin{figure}[ht]
\centering \centering
\includegraphics[width = \textwidth]{Plots/GP_213_test_performance.pdf} \includegraphics[width = \textwidth]{Plots/GP_213_test_performance.pdf}
\caption{} \caption{Prediction performance of \model{2}{1}{3} on the test dataset}
\label{fig:GP_213_test_validation} \label{fig:GP_213_test_validation}
\end{figure} \end{figure}
@ -43,14 +43,14 @@
\begin{figure}[ht] \begin{figure}[ht]
\centering \centering
\includegraphics[width = \textwidth]{Plots/GP_313_training_performance.pdf} \includegraphics[width = \textwidth]{Plots/GP_313_training_performance.pdf}
\caption{} \caption{Prediction performance of \model{3}{1}{3} on the training dataset}
\label{fig:GP_313_train_validation} \label{fig:GP_313_train_validation}
\end{figure} \end{figure}
\begin{figure}[ht] \begin{figure}[ht]
\centering \centering
\includegraphics[width = \textwidth]{Plots/GP_313_test_performance.pdf} \includegraphics[width = \textwidth]{Plots/GP_313_test_performance.pdf}
\caption{} \caption{Prediction performance of \model{3}{1}{3} on the test dataset}
\label{fig:GP_313_test_validation} \label{fig:GP_313_test_validation}
\end{figure} \end{figure}
@ -64,14 +64,14 @@
\begin{figure}[ht] \begin{figure}[ht]
\centering \centering
\includegraphics[width = \textwidth]{Plots/SVGP_123_training_performance.pdf} \includegraphics[width = \textwidth]{Plots/SVGP_123_training_performance.pdf}
\caption{} \caption{Prediction performance of \model{1}{2}{3} on the training dataset}
\label{fig:SVGP_train_validation} \label{fig:SVGP_train_validation}
\end{figure} \end{figure}
\begin{figure}[ht] \begin{figure}[ht]
\centering \centering
\includegraphics[width = \textwidth]{Plots/SVGP_123_test_performance.pdf} \includegraphics[width = \textwidth]{Plots/SVGP_123_test_performance.pdf}
\caption{} \caption{Prediction performance of \model{1}{2}{3} on the test dataset}
\label{fig:SVGP_test_validation} \label{fig:SVGP_test_validation}
\end{figure} \end{figure}

View file

@ -4,7 +4,7 @@
\centering \centering
\includegraphics[width = \includegraphics[width =
\textwidth]{Plots/1_SVGP_480pts_inf_window_12_averageYear_evol_hyperparameters.pdf} \textwidth]{Plots/1_SVGP_480pts_inf_window_12_averageYear_evol_hyperparameters.pdf}
\caption{GP last model performance} \caption{Evolution of SVGP hyperparameters}
\label{fig:SVGP_evol_hyperparameters} \label{fig:SVGP_evol_hyperparameters}
\end{figure} \end{figure}

Binary file not shown.

View file

@ -83,6 +83,18 @@
number = {4} number = {4}
} }
@book{europeancommission.jointresearchcentre.EnergyConsumptionEnergy2018,
title = {Energy Consumption and Energy Efficiency Trends in the {{EU}}-28 for the Period 2000-2016.},
author = {{European Commission. Joint Research Centre.}},
date = {2018},
publisher = {{Publications Office}},
location = {{LU}},
url = {https://data.europa.eu/doi/10.2760/574824},
urldate = {2021-06-25},
file = {/home/radu/Zotero/storage/6WCHIEUC/European Commission. Joint Research Centre. - 2018 - Energy consumption and energy efficiency trends in.pdf},
langid = {english}
}
@article{f.holmgrenPvlibPythonPython2018, @article{f.holmgrenPvlibPythonPython2018,
title = {Pvlib Python: A Python Package for Modeling Solar Energy Systems}, title = {Pvlib Python: A Python Package for Modeling Solar Energy Systems},
shorttitle = {Pvlib Python}, shorttitle = {Pvlib Python},
@ -240,7 +252,7 @@
@article{kabzanLearningBasedModelPredictive2019, @article{kabzanLearningBasedModelPredictive2019,
title = {Learning-{{Based Model Predictive Control}} for {{Autonomous Racing}}}, title = {Learning-{{Based Model Predictive Control}} for {{Autonomous Racing}}},
author = {Kabzan, Juraj and Hewing, Lukas and Liniger, Alexander and Zeilinger, Melanie N.}, author = {Kabzan, J. and Hewing, Lukas and Liniger, Alexander and Zeilinger, Melanie N.},
date = {2019-10}, date = {2019-10},
journaltitle = {IEEE Robotics and Automation Letters}, journaltitle = {IEEE Robotics and Automation Letters},
volume = {4}, volume = {4},
@ -463,6 +475,26 @@
url = {https://www.tensorflow.org/} url = {https://www.tensorflow.org/}
} }
@article{tsemekiditzeiranakiAnalysisEUResidential2019,
title = {Analysis of the {{EU Residential Energy Consumption}}: {{Trends}} and {{Determinants}}},
shorttitle = {Analysis of the {{EU Residential Energy Consumption}}},
author = {Tsemekidi Tzeiranaki, Sofia and Bertoldi, Paolo and Diluiso, Francesca and Castellazzi, Luca and Economidou, Marina and Labanca, Nicola and Ribeiro Serrenho, Tiago and Zangheri, Paolo},
date = {2019-01},
journaltitle = {Energies},
volume = {12},
pages = {1065},
publisher = {{Multidisciplinary Digital Publishing Institute}},
doi = {10.3390/en12061065},
url = {https://www.mdpi.com/1996-1073/12/6/1065},
urldate = {2021-06-25},
abstract = {This article analyses the status and trends of the European Union (EU) residential energy consumption in light of the energy consumption targets set by the EU 2020 and 2030 energy and climate strategies. It assesses the energy efficiency progress from 2000 to 2016, using the official Eurostat data. In 2016, the residential energy consumption amounted to 25.71\% of the EU\’s final energy consumption, representing the second largest consuming sector after transport. Consumption-related data are discussed together with data on some main energy efficiency policies and energy consumption determinants, such as economic and population growth, weather conditions, and household and building characteristics. Indicators are identified to show the impact of specific determinants on energy consumption and a new indicator is proposed, drawing a closer link between energy trends and policy and technological changes in the sector. The analysis of these determinants highlights the complex dynamics behind the demand of energy in the residential sector. Decomposition analysis is carried out using the Logarithmic Mean Divisia Index technique to provide a more complete picture of the impact of various determinants (population, wealth, intensity, and weather) on the latest EU-28 residential energy consumption trends. The article provides a better understanding of the EU residential energy consumption, its drivers, the impact of current policies, and recommendations on future policies.},
file = {/home/radu/Zotero/storage/JFTJ9Z9Q/Tsemekidi Tzeiranaki et al. - 2019 - Analysis of the EU Residential Energy Consumption.pdf;/home/radu/Zotero/storage/RS4MYQ3A/1065.html},
issue = {6},
keywords = {energy efficiency policies,indicators,regression analysis,residential energy consumption},
langid = {english},
number = {6}
}
@online{WhatAreTypical2018, @online{WhatAreTypical2018,
title = {What Are Typical {{U}}-{{Values}} on Windows and Doors? | {{Aspire Bifolds Surrey}}}, title = {What Are Typical {{U}}-{{Values}} on Windows and Doors? | {{Aspire Bifolds Surrey}}},
shorttitle = {What Are Typical {{U}}-{{Values}} on Windows and Doors?}, shorttitle = {What Are Typical {{U}}-{{Values}} on Windows and Doors?},