WIP: Working on Implementation Section

This commit is contained in:
Radu C. Martin 2021-06-19 19:54:43 +02:00
parent 524074b2e2
commit d33f0fd4d6
47 changed files with 1842 additions and 27 deletions

\section{Introduction}
{\color{red}\lipsum[1-3]}
% TODO: [Introduction] Control for building regulation

Gaussian Processes for building control have been studied before in the context
of Demand Response, {\color{orange} where the buildings are used for their heat
capacity in order to reduce the stress on energy providers during peak load times}
{\color{red}\lipsum[1]}
% TODO: [Previous Research] Finish with need for adaptive schemes
{\color{red}\lipsum[2-3]}
\clearpage

\section{Gaussian Processes Background}
{\color{red} \lipsum[1-5]}
The \acrfull{gp} is a member of the \textit{kernel machines} class of algorithms
in the field of machine learning.
% TODO: [GP Background] General info and math background on GP
The formal definition~\cite{rasmussenGaussianProcessesMachine2006} of
\acrlong{gp} states that:
% TODO: [GP Background] Definition of GP (from Rasmussen book?)
% TODO: [GP Background] Some relevant information from the GP modeling textbook
\begin{displayquote}
A Gaussian Process is a collection of random variables, any finite number of
which have a joint Gaussian distribution.
\end{displayquote}
A \acrshort{gp} is completely specified by a mean and a covariance function:
\begin{equation}
\begin{aligned}
m(\mathbf{x}) &= \mathbb{E}[f(\mathbf{x})] \\
k(\mathbf{x}, \mathbf{x'}) &= \mathbb{E}[(f(\mathbf{x}) -
m(\mathbf{x}))(f(\mathbf{x'}) - m(\mathbf{x'}))]
\end{aligned}
\end{equation}
This notation is commonly abbreviated as:
\begin{equation}
f(\mathbf{x}) \sim \mathcal{GP}(m(\mathbf{x}), k(\mathbf{x}, \mathbf{x'}))
\end{equation}
Making the simplifying assumption that $m(\mathbf{x}) = 0$, we have the joint
distribution of the training outputs $\mathbf{f}$ and the test outputs
$\mathbf{f_*}$ according to the given prior:
\begin{equation}
\begin{bmatrix}
\mathbf{f} \\
\mathbf{f_*} \\
\end{bmatrix} \sim
\mathcal{N}\left(
\mathbf{0},
\begin{bmatrix}
K(X, X) & K(X, X_*) \\
K(X_*, X) & K(X_*, X_*) \\
\end{bmatrix}
\right)
\end{equation}
In the case of noisy observations, assuming $y = f + \epsilon$ with $\epsilon
\sim \mathcal{N}(0, \sigma_n^2)$, we have the following joint distribution:
\begin{equation}
\begin{bmatrix}
\mathbf{y} \\
\mathbf{f_*} \\
\end{bmatrix} \sim
\mathcal{N}\left(
\mathbf{0},
\begin{bmatrix}
K(X, X) + \sigma_n^2 I& K(X, X_*) \\
K(X_*, X) & K(X_*, X_*) \\
\end{bmatrix}
\right)
\end{equation}
which, for the rest of the section, will be used in the abbreviated form:
\begin{equation}
\begin{bmatrix}
\mathbf{y} \\
\mathbf{f_*} \\
\end{bmatrix} \sim
\mathcal{N}\left(
\mathbf{0},
\begin{bmatrix}
K + \sigma_n^2 I& K_*^T \\
K_* & K_{**} \\
\end{bmatrix}
\right)
\end{equation}
\subsection{Parameter learning}
``Training'' a \acrshort{gp} is the process of finding the kernel parameters that
best explain the data. This is done by maximizing the probability density
function of the observations $y$, also known as the marginal likelihood:
\begin{equation}\label{eq:gp_likelihood}
p(y) = \frac{1}{\sqrt{(2\pi)^{n}\det{\left(K + \sigma_n^2I\right)}}}
\exp{\left(-\frac{1}{2}y^T\left(K + \sigma_n^2I\right)^{-1}y\right)}
\end{equation}
In order to simplify computation in actual \acrshort{gp} implementations, the
value that gets maximized is the log of Equation~\ref{eq:gp_likelihood}, the log
marginal likelihood:
\begin{equation}\label{eq:gp_log_likelihood}
\log{\left(p(y)\right)} = - \frac{1}{2}\log{\left(
\det{\left(
K + \sigma_n^2I
\right)}
\right)}
- \frac{1}{2}y^T\left(
K + \sigma_n^2I
\right)^{-1}y
- \frac{n}{2}\log{\left(2\pi\right)}
\end{equation}
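As an illustration, the log marginal likelihood can be evaluated numerically
through a Cholesky factorization, which avoids forming the matrix inverse
explicitly. The sketch below assumes a one-dimensional \acrshort{se} kernel;
the function names are illustrative, not part of any particular library.

```python
import numpy as np

def se_kernel(X1, X2, variance=1.0, lengthscale=1.0):
    """Squared Exponential kernel matrix for 1-D inputs."""
    sqdist = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def log_marginal_likelihood(X, y, sigma_n=0.1):
    """log p(y) = -1/2 y^T (K + s^2 I)^-1 y - 1/2 log det(K + s^2 I)
    - n/2 log(2 pi), computed via the Cholesky factor L of (K + s^2 I)."""
    n = len(X)
    Kn = se_kernel(X, X) + sigma_n**2 * np.eye(n)
    L = np.linalg.cholesky(Kn)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # sum(log diag(L)) equals 1/2 log det(Kn)
    return -0.5 * y @ alpha - np.sum(np.log(np.diag(L))) - 0.5 * n * np.log(2 * np.pi)
```

The Cholesky route is the standard numerically stable way of evaluating both
the quadratic term and the log determinant in one factorization.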
\subsection{Prediction}
Predictions at the test points $X_*$ are obtained by conditioning the joint
Gaussian prior on the observations, which yields the posterior mean and
covariance:
\begin{equation}
\begin{aligned}
\bar{\mathbf{f}}_* = \mathbb{E}\left(f_*|X, \mathbf{y}, X_*\right) &=
K_*\left(K + \sigma_n^2I\right)^{-1}\mathbf{y} \\
\text{cov}(\mathbf{f_*}) &= K_{**} - K_*\left(K +\sigma_n^2I\right)^{-1}K_*^T \\
\end{aligned}
\end{equation}
In the case of a fixed, nonzero mean function, the zero-mean \acrshort{gp} is
applied to the \textit{difference} between the observations and the mean
function, giving the predictive mean:
\begin{equation}
\bar{\mathbf{f}}_* = \mathbf{m}(X_*) + K_*\left(K +
\sigma_n^2I\right)^{-1}(\mathbf{y} - \mathbf{m}(X)) \\
\end{equation}
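The prediction equations above translate almost directly into code. The sketch
below (illustrative names, a 1-D \acrshort{se} kernel assumed) returns the
posterior mean and covariance at the test points:

```python
import numpy as np

def gp_predict(X, y, Xs, sigma_n=1e-3, lengthscale=0.3):
    """Zero-mean GP posterior:
    mean = K_* (K + s^2 I)^-1 y,  cov = K_** - K_* (K + s^2 I)^-1 K_*^T."""
    k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :])**2 / lengthscale**2)
    Kn = k(X, X) + sigma_n**2 * np.eye(len(X))   # K + sigma_n^2 I
    Ks = k(Xs, X)                                # K_*
    mean = Ks @ np.linalg.solve(Kn, y)
    cov = k(Xs, Xs) - Ks @ np.linalg.solve(Kn, Ks.T)
    return mean, cov
```

At the training inputs and with small noise, the posterior mean reproduces the
observations and the predictive variance collapses towards zero.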
\subsection{Kernels}
The choice of kernel is an important part of any kernel machine class
algorithm. It serves the purpose of shaping the behaviour of the \acrshort{gp}
by imposing a desired level of smoothness on the resulting functions,
periodicity, linearity, etc. This extends the use cases of \acrshort{gp}
models while including any available prior information on the system to be
modelled.
% TODO: [GP Background] Different kernels used in dynamical systems with math
For the purpose of identifying a dynamical system, a few kernels are popular
choices~\cite{kocijanModellingControlDynamic2016}:
% TODO: [GP Background] High level and math parts of SVGP
% TODO: [GP Background] Add citation for SVGP (look in GPflow source code)
\subsubsection*{Squared Exponential Kernel}
This kernel is used when the system to be modelled is assumed to be smooth and
continuous. The basic version of the \acrshort{se} kernel has the following form:
\begin{equation}
k(\mathbf{x}, \mathbf{x'}) = \sigma^2 \exp{\left(- \frac{1}{2}\frac{\norm{\mathbf{x} -
\mathbf{x'}}^2}{l^2}\right)}
\end{equation}
with the model variance $\sigma^2$ and lengthscale $l$ as parameters.
The lengthscale indicates how fast the correlation diminishes as the two points
get further apart from each other.
This function can be modified to use an arbitrary positive semi-definite matrix
$\Lambda^{-1}$, which imposes a different weight on each combination of the
dimensions of $\mathbf{x}$ and $\mathbf{x'}$:
\begin{equation}
\begin{aligned}
k(\mathbf{x}, \mathbf{x'})
&= \sigma^2\exp{\left[-\frac{1}{2} (\mathbf{x} - \mathbf{x'})^T \Lambda^{-1}
(\mathbf{x} - \mathbf{x'})\right]} \\
&= \sigma^2 \exp{\left[-\frac{1}{2}\sum_{d=1}^D w_d(x_d - x'_d)^2\right]}
\end{aligned}
\end{equation}
where $w_d = \frac{1}{l_d^2}; d = 1 ,\dots, D$, with $D$ being the dimension of
the input data.
The special case of $\Lambda^{-1} = \text{diag}{\left([l_1^{-2},\dots,l_D^{-2}]\right)}$
is equivalent to implementing different lengthscales on different regressors.
This can be used to assess the relative importance of each regressor through the
value of the hyperparameters. This is the \acrfull{ard} property.
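As a small illustration of the \acrshort{ard} property, the weighted
\acrshort{se} kernel can be sketched as follows (illustrative names, not from
any particular library):

```python
import numpy as np

def se_ard(x, xp, variance, lengthscales):
    """ARD Squared Exponential:
    sigma^2 * exp(-1/2 * sum_d (x_d - x'_d)^2 / l_d^2)."""
    w = 1.0 / np.asarray(lengthscales, dtype=float)**2    # w_d = 1 / l_d^2
    r2 = np.sum(w * (np.asarray(x, dtype=float) - np.asarray(xp, dtype=float))**2)
    return variance * np.exp(-0.5 * r2)
```

A very large lengthscale on one dimension makes the kernel insensitive to that
regressor, which is how the learned hyperparameters reveal each regressor's
relative importance.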
\subsubsection*{Rational Quadratic Kernel}
The \acrfull{rq} Kernel can be interpreted as an infinite sum of \acrshort{se}
kernels with different lengthscales. It has the same smooth behaviour as the
\acrlong{se} Kernel, but can take into account the difference in function
behaviour for large scale vs small scale variations.
\begin{equation}
k(\mathbf{x}, \mathbf{x'}) = \sigma^2 \left(1+\frac{1}{2}\frac{\norm{\mathbf{x} -
\mathbf{x'}}^2}{\alpha l^2}\right)^{-\alpha}
\end{equation}
When extended with \acrshort{ard} behaviour, it becomes:
\begin{equation}
k(\mathbf{x}, \mathbf{x'})
= \sigma^2\left[1 + \frac{1}{2\alpha} {(\mathbf{x} - \mathbf{x'})}^T \Lambda^{-1}
(\mathbf{x} - \mathbf{x'})\right]^{-\alpha} \\
\end{equation}
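Numerically, the \acrshort{rq} kernel recovers the \acrshort{se} kernel in the
limit $\alpha \to \infty$, which the following sketch (illustrative names) makes
easy to check:

```python
import numpy as np

def rq_kernel(x, xp, variance=1.0, lengthscale=1.0, alpha=1.0):
    """Rational Quadratic: sigma^2 * (1 + r^2 / (2 alpha l^2))^(-alpha)."""
    r2 = float(np.sum((np.asarray(x) - np.asarray(xp))**2))
    return variance * (1.0 + r2 / (2.0 * alpha * lengthscale**2))**(-alpha)
```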
\subsection{Extending GPs to larger datasets}
There exist multiple methods of scaling \acrshort{gp}s to larger datasets
without incurring the full penalty of inverting the covariance matrix. An
overview and comparison of multiple methods is given
in~\cite{liuUnderstandingComparingScalable2019}.
For the scope of this project the choice of using the \acrfull{svgp} models has
been made, since it provides a very good balance of scalability, capability,
robustness and controllability~\cite{liuUnderstandingComparingScalable2019}.
The \acrlong{svgp} was first introduced
by~\textcite{hensmanGaussianProcessesBig2013} as a way to scale the use of
\acrshort{gp}s to large datasets. A detailed explanation of the mathematics
behind \acrshort{svgp}s and the reasoning behind them is given
in~\cite{yiSparseVariationalGaussian2021}, with a very brief summary presented
below.
\subsubsection{Summarizing the training data}
Trivially downsampling the available data to a size that fits within the
computational budget is not the best solution, since it discards large amounts
of potentially useful information.
In order to solve the problem of reducing the dataset size without the downside
of discarding potentially useful information, a new set of random variables
$f(X_s)$, usually denoted as $f_s$ is introduced, with the requirement that this
new dataset has size $n_s$ smaller than the size $n$ of the original dataset.
The $X_s$ are called \textit{inducing locations}, and $f_s$ --- \textit{inducing
random variables}. They are said to summarize the data in the sense that a model
trained on this new dataset should be able to generate the original dataset with
a high probability.
The multivariate Gaussian distribution is used to establish the relationship
between $f_s$ and $f$, which will serve the role of the prior, now called the
sparse prior:
\begin{equation}
\begin{bmatrix}
\mathbf{f}(X) \\
\mathbf{f}(X_s) \\
\end{bmatrix} \sim
\mathcal{N}\left(
\mathbf{0},
\begin{bmatrix}
K(X, X) & K(X, X_s) \\
K(X_s, X) & K(X_s, X_s) \\
\end{bmatrix}
\right)
\end{equation}
\subsubsection{Evidence Lower Bound}
Computing the log likelihood is one of the most expensive parts of model
training, due to inversion of the kernel matrix term ${\left(K +
\sigma_n^2I\right)}^{-1}$ (cf. Equation~\ref{eq:gp_likelihood}) with an
algorithmic complexity of $\mathcal{O}(n^3)$.
In order to solve this problem, the log likelihood
(Equation~\ref{eq:gp_log_likelihood}) used for training a classical
\acrshort{gp} is replaced with an approximation that is computationally
tractable on larger datasets.
The following derivation of the \acrshort{elbo} is based on the one presented
in~\cite{yangUnderstandingVariationalLower}.
Assume $X$ to be the observations, and $Z$ the set of hidden (latent)
variables --- the parameters of the \acrshort{gp} model. The posterior
distribution of the hidden variables can be written as follows:
\begin{equation}
p(Z|X) = \frac{p(X|Z)p(Z)}{p(X)} = \frac{p(X|Z)p(Z)}{\int_Z p(X,Z)}
\end{equation}
The main idea of Variational Bayesian methods is to replace the true posterior
distribution $p(Z|X)$ with an approximation $q(Z)$ that is easier to compute,
but is as close as possible to the original.
One measure of the closeness of the two distributions is the \acrfull{kl}
divergence, which for variational inference takes the following form:
\vspace{5pt}
\begin{equation}
\begin{aligned}
KL\left[q(Z)||p(Z|X)\right]
&= \int_Z q(Z)\log{\left(\frac{q(Z)}{p(Z|X)}\right)} \\
&= - \int_Z q(Z)\log{\left(\frac{p(Z|X)}{q(Z)}\right)} \\
&= - \left(
\int_Z q(Z)\log{\left(\frac{p(X,Z)}{q(Z)}\right)}
- \int_Z q(Z)\log{\left(p(X)\right)}
\right) \\
&= - \int_Z q(Z) \log{\left(\frac{p(X,Z)}{q(Z)}\right)}
+ \log{\left(p(X)\right)}\int_Z q(Z) \\
&= -L + \log{\left(p(X)\right)} \\
\end{aligned}
\end{equation}
\vspace{5pt}
where $L$ is the \acrfull{elbo}. Rearranging this equation we get:
\begin{equation}
L = \log{\left(p(X)\right)} - KL\left[q(Z)||p(Z|X)\right]
\end{equation}
Since the \acrshort{kl} divergence is always non-negative, it follows that
$L \leq \log{\left(p(X)\right)}$, i.e.\ $L$ is a lower bound on the log
probability of the observations.
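The identity $L = \log{\left(p(X)\right)} - KL\left[q(Z)||p(Z|X)\right]$ can be
checked numerically on a toy discrete model, where all the integrals reduce to
finite sums (the probability values below are arbitrary):

```python
import numpy as np

# Toy discrete latent variable Z taking three values, for one fixed observation X
p_Z = np.array([0.5, 0.3, 0.2])           # prior p(Z)
p_X_given_Z = np.array([0.9, 0.4, 0.1])   # likelihood p(X|Z)
p_XZ = p_X_given_Z * p_Z                  # joint p(X, Z)
p_X = p_XZ.sum()                          # evidence p(X)
posterior = p_XZ / p_X                    # true posterior p(Z|X)

q = np.array([0.6, 0.3, 0.1])             # an arbitrary variational q(Z)

elbo = np.sum(q * np.log(p_XZ / q))       # L = E_q[log(p(X,Z)/q(Z))]
kl = np.sum(q * np.log(q / posterior))    # KL[q(Z) || p(Z|X)]
```

Maximizing the \acrshort{elbo} over $q$ therefore simultaneously tightens the
bound on $\log{p(X)}$ and drives $q(Z)$ towards the true posterior.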
\subsection{Gaussian Process Models for Dynamical
Systems}\label{sec:gp_dynamical_system}
In the context of Dynamical Systems Identification and Control, Gaussian
Processes are used to represent multiple different model structures, ranging
from state space and \acrshort{nfir} structures, to the more complex
\acrshort{narx}, \acrshort{noe} and \acrshort{narmax}.
The general form of an \acrfull{narx} model is as follows:
\begin{equation}
\hat{y}(k) =
f(w(k-1),\dots,w(k-l_w),y(k-1),\dots,y(k-l_y),u(k-1),\dots,u(k-l_u))
\end{equation}
where $w$ are the exogenous inputs with a maximum lag of $l_w$, $u$ are the
model inputs with a maximum lag of $l_u$, and $y$ are the model outputs with a
maximum lag of $l_y$. This structure is a prediction model, since it uses the
measured values of $y$ as regressors.
The \acrfull{noe} uses the past predictions $\hat{y}$ for future predictions:
\begin{equation}
\hat{y}(k) =
f(w(k-1),\dots,w(k-l_w),\hat{y}(k-1),\dots,\hat{y}(k-l_y),u(k-1),\dots,u(k-l_u))
\end{equation}
The \acrshort{noe} structure is therefore a \textit{simulation model}.
In order to get the best simulation results from a \acrshort{gp} model, the
\acrshort{noe} structure would have to be employed. Due to the high algorithmic
complexity of training and evaluating \acrshort{gp} models, this approach is
computationally intractable. In practice, a \acrshort{narx} model is trained
instead and validated through multi-step ahead prediction.
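Constructing the regressor matrix for a \acrshort{narx} model from measured
signals is mechanical; a sketch (illustrative function name, 1-D signals
assumed):

```python
import numpy as np

def narx_regressors(w, u, y, l_w, l_u, l_y):
    """Row k of the regressor matrix holds
    [w(k-1)..w(k-l_w), y(k-1)..y(k-l_y), u(k-1)..u(k-l_u)],
    with the corresponding target y(k)."""
    lmax = max(l_w, l_u, l_y)
    rows, targets = [], []
    for k in range(lmax, len(y)):
        row = ([w[k - i] for i in range(1, l_w + 1)]
               + [y[k - i] for i in range(1, l_y + 1)]
               + [u[k - i] for i in range(1, l_u + 1)])
        rows.append(row)
        targets.append(y[k])
    return np.array(rows), np.array(targets)
```

An \acrshort{noe}-style simulation is obtained from the same trained model by
feeding the predictions $\hat{y}$ back in place of the measured $y$ at every
step.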
\clearpage

\section{CARNOT model}\label{sec:CARNOT}
In order to better analyze the different model training and update methods it
was decided to replace the physical \pdome\ building with a computer model.
This allows for faster-than-real-time simulations, as well as perfectly
reproducing the weather conditions and building response for direct comparison
of different control schemes over long periods of time.
The model is designed using the CARNOT
toolbox~\cite{lohmannEinfuehrungSoftwareMATLAB} for Simulink. It is based on the
CARNOT default \textit{Room Radiator} model, with the following changes:
\begin{itemize}
\item Only one of the two default rooms is used
\item The outside walls are replaced with windows
\item A window is added on the roof as an equivalent for the four skylights
presented in Section~\ref{sec:Physical_dimensions}
\end{itemize}
The resulting schema for the \pdome\ building is presented in
Figure~\ref{fig:CARNOT_polydome}. The following sections will focus on detailing
the choice of all the necessary model parameters.
\begin{figure}[ht]
\centering
\includegraphics[width = \textwidth]{Images/polydome_room_model.pdf}
\caption{Simulink Schema of the CARNOT \pdome\ building model}
\label{fig:CARNOT_polydome}
\end{figure}
\clearpage
The Simulink model is then completed by adding a \textit{Weather Data File}
containing weather measurements for a whole year, and a \textit{Weather
Prediction} block responsible for sending weather predictions to the MPC.\@ The
controller itself is defined in Python and is connected to Simulink via three
TCP/IP sockets: one for sending the control signal, one for reading the
temperature measurement, and one for reading the weather forecast.
The final Simulink schema is presented in Figure~\ref{fig:CARNOT_complete}:
\begin{figure}[ht]
\centering
\includegraphics[width = \textwidth]{Images/polydome_python.pdf}
\caption{Simulink Schema of the Complete Simulation}
\label{fig:CARNOT_complete}
\end{figure}
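The exact framing of the messages on these sockets depends on the Simulink
blocks used; as a sketch, assuming each exchanged value is a single
little-endian double (an illustrative assumption, not the actual protocol), the
Python side of one such socket could look like:

```python
import socket
import struct

def recv_double(sock):
    """Read exactly 8 bytes from the socket and decode one little-endian
    double -- the framing here is an illustrative assumption."""
    data = b""
    while len(data) < 8:
        chunk = sock.recv(8 - len(data))
        if not chunk:
            raise ConnectionError("socket closed before a full message arrived")
        data += chunk
    return struct.unpack("<d", data)[0]

def send_double(sock, value):
    """Encode one value as a little-endian double and send it."""
    sock.sendall(struct.pack("<d", float(value)))
```

Looping over receives and sends at every sampling instant then implements the
measurement/control exchange between the Python controller and Simulink.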
\clearpage
\subsection{Physical dimensions}\label{sec:Physical_dimensions}
The \pdome\ building is a dome-shaped, single-zone building serving the role
of classroom for audiences of around one hundred students.
Most of the necessary measurements are already available from the
\citetitle*{nattererPolydomeTimberShell1993} article by
\textcite{nattererPolydomeTimberShell1993}. It presents the \pdome\ geometry as
having a floor area of 25m $\times$ 25m, a maximum building height of 7m, and
walls with a maximum height of 3m. The roof is a spherical cap with a radius of
approximately 28m.
One particularity of the \pdome\ is the presence of four large skylights on the
roof. An estimate of their size has been made using images from \textit{Google
Maps Satellite View}~\cite{GoogleMaps}, with the included measurement tool. These
measurements are presented in Figure~\ref{fig:Google_Maps_Skylights}. The
skylights are measured to be squares of edge 2.5m.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.8\textwidth]{Images/google_maps_polydome_skylights}
\caption{Google Maps Satellite view of the \pdome\ building}
\label{fig:Google_Maps_Skylights}
\end{figure}
The only remaining piece of missing information is the \textit{stem wall}
height: the height of the walls, measured from the ground, on which the dome
roof directly rests. Due to the limited campus access, this measure has also
been approximated from a Google Maps image, presented in
Figure~\ref{fig:Google_Maps_Streetview}. An object of known size has been used
as reference, after which the following measurements have been done in
\citetitle{kimballGIMPGNUImage}~\cite{kimballGIMPGNUImage} using the
\textit{Measure Tool}.
The chosen reference object is the \pdome\ HVAC system, the full description of
which is presented in Section~\ref{sec:HVAC_parameters}, and which has a known
height of 2061mm \cite{aermecRoofTopManuelSelection}.
\begin{figure}[ht]
\centering
\includegraphics[width = \textwidth]{Images/polydome_streetview_annotated}
\caption{Google Maps StreetView view of the \pdome\ building}
\label{fig:Google_Maps_Streetview}
\end{figure}
The graphical analysis resulted in the measurements presented in
Table~\ref{tab:GIMP_measurements}:
\begin{table}[ht]
%\vspace{-8pt}
\centering
\begin{tabular}{||c c c c||}
\hline
Object & Size [px] & Size [mm] & Size [m]\\
\hline \hline
HVAC height & 70 & 2100 & 2.1 \\
Building height & 230 & 6900 & 6.9 \\
Stem wall & 45 & 1350 & 1.35 \\
Dome height & 185 & 5550 & 5.55 \\
\hline
\end{tabular}
\caption{Calculated Dome Dimensions}
\label{tab:GIMP_measurements}
\end{table}
For a validation of the results shown in Table~\ref{tab:GIMP_measurements} it
is possible to compare the total \pdome\ building height measured in GIMP with
the value presented beforehand. The graphical measure of 6.9m is indeed very
close to the described height of 7m.
These measurements can be used to set the size parameters of all the nodes in
the model:
\begin{itemize}
\item Side windows with a size of 2m $\times$ 25m each
\item Roof window with a size of 5m $\times$ 5m, cumulative surface of all
the skylights
\item Ceiling with a size of 25m $\times$ 25m
\item Floor with a size of 25m $\times$ 25m
\end{itemize}
\subsection{Internal volume}
The \pdome\ building has a structure that is mostly based on a dome shape, with
the difference that the dome portion of the building does not reach the ground,
but stands above it at a height of $\approx 1.35m$ (cf.
Table~\ref{tab:GIMP_measurements}), with the large side windows extending to the
ground and creating a \textit{stem wall} for the dome to sit on.
The total internal volume of the \pdome\ building can therefore be approximated
as the sum of the volumes of its two elements: the stem and the dome.
Geometrically, a dome is a portion of a sphere cut off by a plane. Its volume
can therefore be computed in a similar fashion to that of a complete sphere.
A sphere can be regarded as the volume of revolution of the function $f(x)$ (cf.
Equation~\ref{eq:rot_func}) around the $x$ axis, where $R$ is the sphere radius
and $x \in [0, 2R]$:
\begin{equation}\label{eq:rot_func}
f(x) = \sqrt{R^2 - (x-R)^2} = \sqrt{2Rx - x^2}
\end{equation}
It is therefore possible to compute the volume of a dome by integrating $f(x)$
up to the dome height $h$, as presented in Equation~\ref{eq:rot_vol}.
\begin{equation}\label{eq:rot_vol}
\begin{aligned}
V &= \pi \int_0^h f(x)^2 dx \\
&= \pi \int_0^h (2Rx - x^2)dx \\
&= \pi \left( Rx^2 - \frac{1}{3}x^3 \right)_0^h \\
&= \frac{\pi h^2}{3} (3R - h) \\
\end{aligned}
\end{equation}
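The closed form of Equation~\ref{eq:rot_vol} can be cross-checked against a
direct numerical integration of $\pi f(x)^2$ (a quick sketch):

```python
import numpy as np

def dome_volume(R, h):
    """Closed form from the integral: V = pi * h^2 * (3R - h) / 3."""
    return np.pi * h**2 * (3.0 * R - h) / 3.0

def dome_volume_numeric(R, h, n=100_000):
    """Midpoint-rule integration of pi * (2 R x - x^2) over [0, h]."""
    x = (np.arange(n) + 0.5) * (h / n)
    return np.pi * np.sum(2.0 * R * x - x**2) * (h / n)
```

Setting $h = 2R$ recovers the volume of the full sphere, $\frac{4}{3}\pi R^3$,
as expected.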
The volume of the \pdome\ dome is computed by first finding the radius of the
complete sphere, the radius of curvature (cf.
Equation~\ref{eq:radius_curvature}, where $h$ is the height of the dome and $r$
is the radius of the base of the dome) and plugging it into
Equation~\ref{eq:rot_vol}.
\begin{equation}\label{eq:radius_curvature}
R_c = \frac{r^2 + h^2}{2h}
\end{equation}
The volume of the dome is then given by:
\begin{equation}
V_d = \frac{1}{3} \pi h^2 (3R_c - h) = \frac{1}{6} \pi h (3r^2 + h^2)
\end{equation}
The stem of the \pdome\ is approximated as a rectangular prism with a square
base of edge $l_s = 25$m and height $h_s$, so its volume can be calculated as:
\begin{equation}
V_s = l_s^2 \cdot h_s
\end{equation}
The total volume of the building is then given as:
\begin{equation}
V = V_d + V_s = \frac{1}{6} \pi h (3r^2 + h^2) + l_s^2 h_s
\end{equation}
Numerically, considering a dome diameter of 28m, a dome height of 5.55m and a stem
wall edge of 25m, we get the approximate volume of the building:
\begin{equation}\label{eq:numerical_volume}
V = V_d + V_s = 1798.22m^3 + 843.75m^3 = 2641.97m^3 \approx 2650m^3
\end{equation}
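These numerical values can be reproduced directly from the formulas above (a
sketch using the measured dimensions):

```python
import numpy as np

r, h = 14.0, 5.55       # dome base radius and height [m]
l_s, h_s = 25.0, 1.35   # stem wall edge and height [m]

R_c = (r**2 + h**2) / (2.0 * h)               # radius of curvature
V_d = np.pi * h**2 * (3.0 * R_c - h) / 3.0    # dome volume
V_s = l_s**2 * h_s                            # stem volume
V = V_d + V_s                                 # total internal volume
```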
The value presented in Equation~\ref{eq:numerical_volume} is used directly in
the \textit{room\_node} of the CARNOT model (cf.
Figure~\ref{fig:CARNOT_polydome}), as well as in the calculation of the Air
Exchange Rate, presented in Section~\ref{sec:Air_Exchange_Rate}.
\subsection{Furniture}
The main function of the \pdome\ building is serving as a classroom for around
one hundred students. It has wood furniture consisting of chairs and tables, as
well as a wooden stage in the center of the building, meant to be used for
presentations. The building also contains a smaller room, housing all the
necessary technical equipment (cf. Figure~\ref{fig:Google_Maps_Streetview}).
The most accurate way of including information on the furniture in the CARNOT
model would be to manually compute the mass, volume and materials for each of
the chairs, tables, etc., but due to the restricted access to the building a
simpler approximation has been made.
\textcite{johraNumericalAnalysisImpact2017} present a methodology to model the
furniture in buildings for multiple different materials, as well as an
\textit{equivalent indoor content material} that is meant to approximate the
furniture content of an office building. These values for mass content, surface
area, material density and thermal conductivity have been taken as the basis for
the \pdome\ furniture content approximation, with the assumption that, since the
\pdome\ is still mostly empty, it has approximately a quarter of the furniture
present in a fully furnished office.
The full set of furniture is therefore approximated in the CARNOT model as a
wall, with the numerical values for mass, surface, thickness and volume
presented below.
\subsubsection*{Surface}
% surface:
% 1/4 * 1.8 [m2/m2 of floor space] * 625 m2 surface = 140 m2
% 140 m2 = [7 20] m [height width]
The equivalent material is taken to have a surface of 1.8 $m^2$ per $m^2$
of floor area~\cite{johraNumericalAnalysisImpact2017}. With a floor area of 625
$m^2$, and assuming the furnishing of the building is a quarter that of a
fully-furnished office, the \pdome\ furniture equivalent wall has a surface area
of:
\begin{equation}
S_f = \frac{1}{4} \cdot 1.8 \left[\frac{\text{m}^2}{\text{m}^2}\right]
\cdot 625\ \left[\text{m}^2\right] = 140\ \left[\text{m}^2\right]
\end{equation}
\subsubsection*{Mass}
% mass:
% 1/4 * 40 [kg/m2 of floor space] * 625 m2 surface = 6250 kg
The mass of the furniture equivalent wall is computed using the same
methodology, considering 40 kg of furniture mass per $m^2$ of floor surface.
\begin{equation}
M_f = \frac{1}{4} \cdot 40 \left[\frac{\text{kg}}{\text{m}^2}\right] \cdot
625\ \left[\text{m}^2\right] = 6250\ \left[\text{kg}\right]
\end{equation}
\subsubsection*{Volume}
% volume:
% 6250[kg]/600[kg/m3] = 10.41 [m3]
The volume of the furniture equivalent wall is calculated with the help of the
equivalent mass and densities:
\begin{equation}
V_f = \frac{M_f}{\rho_f} = \frac{6250}{600}
\left[\frac{\text{kg}}{\text{kg}/\text{m}^3}\right]
= 10.41\ \left[\text{m}^3\right]
\end{equation}
\subsubsection*{Thickness}
% thickness:
%10.41[m3]/140[m2] = 0.075m = 7.5cm
The last parameter of the furniture equivalent wall is computed by dividing its
volume by the surface:
\begin{equation}
h_f = \frac{V_f}{S_f} = \frac{10.41}{140} \left[\frac{m^3}{m^2}\right] =
0.075\ \left[\text{m}\right] = 7.5\ \left[\text{cm}\right]
\end{equation}
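The chain of furniture calculations above can be reproduced in a few lines,
using the equivalent mass, density and surface values from the text:

```python
M_f = 6250.0    # equivalent furniture mass [kg]
rho_f = 600.0   # equivalent material density [kg/m^3]
S_f = 140.0     # equivalent wall surface [m^2]

V_f = M_f / rho_f   # equivalent wall volume [m^3]
h_f = V_f / S_f     # equivalent wall thickness [m]
```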
\subsection{Material properties}
In order to better simulate the behaviour of the real \pdome\ building it is
necessary to approximate the building materials and their properties as closely
as possible. This section details the arguments for the choice of parameters
for each of the CARNOT nodes' properties.
\subsubsection{Windows}
The windows are assumed to be made of two glass panes of thickness 4mm each.
The values of the heat transfer coefficient (U-factor) can vary greatly for
different window technologies, but an educated guess can be made on the lower
and upper bounds based on the age and location of the building.
Window manufacturers state
the U-factor for double glass windows to be between 2.8 \(\frac{W}{m^2K}\) for
older manufacturing techniques and 1.2 \(\frac{W}{m^2K}\) for newer
models~\cite{WhatAreTypical2018}.
The US Energy Department states that typical U-factor values for modern window
installations are in the range of 0.2--1.2
\(\frac{W}{m^2K}\)~\cite{GuideEnergyEfficientWindows}.
The European flat glass association claims that the maximum allowable U-factor
value for new window installations in the private sector buildings in
Switzerland is 1.5
\(\frac{W}{m^2K}\)~\cite{glassforeuropeMinimumPerformanceRequirements2018}.
Considering the aforementioned values, and the fact that the \pdome\ building was
built in 1993~\cite{nattererModelingMultilayerBeam2008}, the default U-factor of
1.8 \(\frac{W}{m^2K}\) has been deemed appropriate.
The total solar energy transmittance $[g]$ and the transmittance in the visible
range $[v_g]$ can vary in the range of 0.1--0.9 depending on manufacturing
technology, tint, etc. The average values of 0.7 and 0.65 respectively have been
chosen for this case.
The values for glass density and heat capacity have been left at their default
values of 2500 \(\frac{kg}{m^3}\) and 1008 \(\frac{J}{kgK}\) respectively.
\subsubsection{Roof and Floor}
%%% Roof
% [5cm wood, 10cm insulation, 5cm wood]
% Conductivity for each material
% Heat capacity for each material
% Density for each material
The roof structure has been assumed to be made out of 10cm of insulation,
enclosed on each side by 5cm of wood.
%%% Floor
% [5cm wood, 10cm insulation, 20cm concrete]
% Conductivity for each material
% Heat capacity for each material
% Density for each material
The floor composition has been taken as consisting of, from top to bottom, 5cm
wood, 10cm insulation followed by 20cm of concrete.
All the necessary values to characterise these materials have been taken
from~\cite{BuildingsHeatTransferData} and are presented in
Table~\ref{tab:material_properties}:
\begin{table}[ht]
%\vspace{-8pt}
\centering
\begin{tabular}{||c c c c||}
\hline
Material & Thermal Conductivity $[k]$ $[\frac{W}{mK}]$ & Specific Heat
Capacity $[c]$ $[\frac{J}{kgK}]$ & Density $[\rho]$ $[\frac{kg}{m^3}]$
\\
\hline \hline
Plywood & 0.12 & 1210 & 540 \\
Insulation & 0.03 & 1200 & 40 \\
Concrete & 1.0 & 700 & 2500 \\
\hline
\end{tabular}
\caption{Material properties for roof and floor elements}
\label{tab:material_properties}
\end{table}
\subsection{HVAC parameters}\label{sec:HVAC_parameters}
The \pdome\ is equipped with an \textit{AERMEC RTY-04} HVAC system. According to
the manufacturer's manual~\cite{aermecRoofTopManuelSelection}, this HVAC houses
two compressors, of power 11.2 kW and 8.4 kW respectively, an external
ventilator of power 1.67 kW, and a reflow ventilator of power 2 kW. The unit
has a typical Energy Efficiency Ratio (EER, cooling efficiency) of 4.9--5.1
and a Coefficient of Performance (COP, heating efficiency) of 5.0, for a maximum
cooling capacity of 64.2 kW.
One particularity of this HVAC unit is that during summer only one of the two
compressors is running. This results in a higher EER in the cases where the
full cooling capacity is not required.
\subsubsection*{Ventilation}
According to the manufacturer's manual~\cite{aermecRoofTopManuelSelection}, the
HVAC unit's external fan has an air debit ranging between 4900
$\text{m}^3/\text{h}$ and 7000 $\text{m}^3/\text{h}$.
\subsubsection*{Air Exchange Rate}\label{sec:Air_Exchange_Rate}
The air exchange rate, also known as `air changes per hour', represents the
number of times the air in a room gets replaced with new air. It can be
computed by dividing the air flow through the room by the room volume:
\begin{equation}
\text{Air exchange rate} = \frac{\text{Air flow}}{\text{Total volume}}
\end{equation}
In the case of the \pdome\ and its HVAC, this results in an air exchange rate
range of:
\begin{equation}
\begin{aligned}
\text{Air exchange rate} &= \frac{4900}{2650}
- \frac{7000}{2650} &\left[\frac{\text{m}^3/\text{h}}{\text{m}^3}\right]\\
&= 1.85 - 2.64 &\left[\frac{1}{\text{h}}\right]
\end{aligned}
\end{equation}
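The bounds above follow directly from the fan airflow range and the approximate
building volume of 2650 $\text{m}^3$ computed earlier:

```python
V = 2650.0                        # approximate building volume [m^3]
q_min, q_max = 4900.0, 7000.0     # external fan airflow range [m^3/h]

ach_min = q_min / V   # air changes per hour at minimum airflow
ach_max = q_max / V   # air changes per hour at maximum airflow
```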
As will be shown in Section~\ref{sec:CARNOT_experimental}, the external fan
is continuously running. The \textit{Air exchange rate} has therefore been fixed
to a value of 2.5 for the duration of the whole simulation. The real airflow
could, of course, vary depending on a multitude of external factors, but
estimating them would require more precise measurements.
\subsection{Validating against experimental data}\label{sec:CARNOT_experimental}
In order to confirm the validity of the model it is necessary to compare the
CARNOT model's behaviour against that of the real \pdome\ building.
Section~\ref{sec:CARNOT_expdata} presents the available experimental data,
Section~\ref{sec:CARNOT_WDB} details the transformation of the available data
into CARNOT's WDB format, and Section~\ref{sec:CARNOT_comparison} details a
qualitative analysis of the differences.
\subsubsection{The available experimental data}\label{sec:CARNOT_expdata}
All the experimental data used for the validation of the CARNOT model has been
collected previously for another
study~\cite{fabiettiMultitimeScaleCoordination2018}, where it has been used to
identify a State Space model for the \pdome\ building.
The data has been collected in the time span of June to August 2017, and is
divided into seven different experiments, as presented in
Table~\ref{tab:exp_dates}. The available measurements are the \textit{Outside
Temperature}, \textit{Solar Irradiation}, \textit{Electrical power consumption}
of the HVAC, and two measurements of \textit{Inside Temperature} in different
parts of the room.
\begin{table}[ht]
\centering
\begin{tabular}{||c c c||}
\hline
Exp no. & Start Date & End Date\\
\hline \hline
1 & 01.06.2017 20:00 & 03.06.2017 17:00 \\
2 & 10.06.2017 16:00 & 12.06.2017 06:00 \\
3 & 16.06.2017 20:00 & 19.06.2017 06:00 \\
4 & 19.06.2017 20:00 & 22.06.2017 06:00 \\
5 & 30.06.2017 20:00 & 03.07.2017 06:00 \\
6 & 07.07.2017 20:00 & 10.07.2017 06:00 \\
7 & 13.07.2017 20:00 & 20.07.2017 06:00 \\
\hline
\end{tabular}
\caption{\pdome\ experimental measurements}
\label{tab:exp_dates}
\end{table}
\clearpage
As mentioned previously, the external fan of the HVAC is constantly running.
This can be seen in Figure~\ref{fig:Polydome_electricity}, where the electricity
consumption of the HVAC has a baseline of 1.67 kW.
\begin{figure}[ht]
\centering
\includegraphics[width = \textwidth]{Plots/Fan_baseline.pdf}
\caption{Electrical Power consumption of the \pdome\ HVAC for Experiment 7}
\label{fig:Polydome_electricity}
\end{figure}
Figure~\ref{fig:Polydome_electricity} also gives an insight into the workings of
the HVAC when it comes to the combination of the two available compressors. The
instruction manual of the HVAC~\cite{aermecRoofTopManuelSelection} notes that in
summer only one of the compressors is running. This allows for a larger EER
value and thus better performance. We can see that this is the case for most of
the experiment, where the power consumption caps at around 6 kW. There are,
however, moments during the first part of the experiment where the power
momentarily peaks over the 6 kW limit, and goes as high as around 9 kW. This
most probably happens when the HVAC decides that the difference between the
setpoint temperature and the actual measured values is too large.
Figure~\ref{fig:Polydome_exp7_settemp} presents the values of the setpoint
temperature and the measured internal temperature.
\begin{figure}[ht]
\centering
\includegraphics[width = \textwidth]{Plots/Exp_settemp.pdf}
\caption{Measured vs setpoint temperature of the HVAC for Experiment 7}
\label{fig:Polydome_exp7_settemp}
\end{figure}
A few observations can be made on these measurements. First, the second
compressor is indeed turned on during the first part of the experiment, when the
setpoint differs greatly from the measured temperature. Second, at the
beginning of Experiment 7, as in the majority of the other experiments, the
system is excited by changing the setpoint temperature; since the HVAC's
internal controller remains active during identification, it oscillates between
using one or two compressors. Lastly, it is possible to notice
that the HVAC is not turned on during the night, with the exception of the
external fan, which runs continuously.
\subsubsection{The CARNOT WDB weather data format}\label{sec:CARNOT_WDB}
For a correct simulation of the building behaviour, CARNOT requires not only
the detailed definition of the building blocks/nodes, but also a very detailed
set of data on the weather conditions. This set includes detailed information
on the sun's position throughout the simulation (zenith and azimuth angles),
the Diffuse Horizontal Irradiance (DHI) and Direct Normal Irradiance (DNI),
direct and diffuse solar radiation on surface, as well as information on the
ambient temperature, humidity, precipitation, pressure, wind speed and
direction, etc.
A detailed overview of each measurement necessary for a simulation is given in
the CARNOT user manual~\cite{CARNOTManual}.
In order to compare the CARNOT model's performance to that of the real \pdome\
it is necessary to simulate the CARNOT model under the same set of conditions
as the ones present during the experimental data collection. To do this, all
the missing values required by the simulation have to be filled in. In some
cases, such as the sun angles, it is possible to compute the exact values; in
other cases the real data is not available, which means that it has to be
inferred from the available data.
The information on the zenith and azimuth solar angles can be computed exactly
if the position and elevation of the building are known. The GPS coordinates
and elevation are found using an online elevation map~\cite{ElevationFinder}.
With that information available, the zenith and azimuth angles, as well as the
angle of incidence (AOI), are computed using the Python pvlib
library~\cite{f.holmgrenPvlibPythonPython2018}.
As opposed to the solar angles which can be computed exactly from the available
information, the Solar Radiation Components (DHI and DNI) have to be estimated
from the available measurements of GHI, zenith angles (Z) and datetime
information. \textcite{erbsEstimationDiffuseRadiation1982} present an
empirical relationship, known as the Erbs model, between the diffuse fraction
(DF) and the clearness index $K_t$, the ratio of GHI to extraterrestrial
irradiance. The DF is then used to compute DHI and DNI as follows:
\begin{equation}
\begin{aligned}
\text{DHI} &= \text{DF} \times \text{GHI} \\
\text{DNI} &= \frac{\text{GHI} - \text{DHI}}{\cos{\text{Z}}}
\end{aligned}
\end{equation}
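The Erbs decomposition above can be sketched in a few lines of NumPy. The
piecewise diffuse-fraction correlation uses the standard Erbs coefficients;
taking the extraterrestrial irradiance as the solar constant is a
simplification made here (pvlib corrects it for the day of the year).

```python
import numpy as np

def erbs_decomposition(ghi, zenith_deg, i_extra=1367.0):
    """Split GHI into DHI and DNI using the Erbs diffuse-fraction model.

    `i_extra` is taken as the solar constant (a simplification); pvlib
    corrects this value for the day of the year.
    """
    cos_z = np.cos(np.radians(zenith_deg))
    # Clearness index: GHI over extraterrestrial horizontal irradiance
    kt = ghi / np.maximum(i_extra * cos_z, 1e-6)
    # Piecewise empirical diffuse fraction DF(kt)
    df = np.where(
        kt <= 0.22,
        1.0 - 0.09 * kt,
        np.where(
            kt <= 0.80,
            0.9511 - 0.1604 * kt + 4.388 * kt**2
            - 16.638 * kt**3 + 12.336 * kt**4,
            0.165,
        ),
    )
    dhi = df * ghi
    dni = (ghi - dhi) / np.maximum(cos_z, 1e-6)
    return dhi, dni
```

In practice the same decomposition is available directly as
\texttt{pvlib.irradiance.erbs}, which is what was used for this project.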
All the other parameters related to solar irradiance, such as the in-plane
irradiance components and the in-plane diffuse irradiances from the sky and
the ground, are computed using the Python pvlib library.
The values that can neither be calculated nor approximated from the available
data, such as the precipitation, wind direction, and the incidence angles in
the plane of the vertical and main/secondary surface axes, have been replaced
with the default CARNOT placeholder value of -9999. The relative humidity,
cloud index, pressure and wind speed have been fixed to 50\%, 0.5, 96300 Pa and
0 $\text{m}/\text{s}$, respectively.
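This filling step can be sketched as follows; the column names and positions
are illustrative only, not the actual CARNOT WDB layout.

```python
import numpy as np

CARNOT_MISSING = -9999.0  # CARNOT's placeholder for unavailable data

def fill_wdb_columns(wdb, columns, fixed_values=None):
    """Fill WDB columns that cannot be computed from the measurements.

    `columns` maps (illustrative) column names to their positions in the
    WDB matrix; entries in `fixed_values` get a constant instead of the
    placeholder.
    """
    fixed_values = fixed_values or {}
    for name, col in columns.items():
        wdb[:, col] = fixed_values.get(name, CARNOT_MISSING)
    return wdb

# Example: precipitation is unknown, pressure is fixed to 96300 Pa
wdb = np.zeros((8, 4))
fill_wdb_columns(wdb, {"precipitation": 0, "pressure": 1},
                 fixed_values={"pressure": 96300.0})
```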
\subsubsection{\pdome\ and CARNOT model comparison}\label{sec:CARNOT_comparison}
With the WDB data compiled, we can now turn to simulating the CARNOT model and
compare its behaviour to that of the real \pdome\ building.
Unfortunately, one crucial piece of information is missing: the amount of heat
the HVAC either pumps into or extracts from the building at any point in time.
This value could be approximated from the electrical power consumption and the
EER/COP values, provided it is known whether the HVAC is in heating or cooling
mode.
Since this information is lacking in the existing experimental datasets, the
heat supplied to or extracted from the system has been inferred from the
electrical energy use, the measured building temperature and the HVAC
temperature setpoint, under the assumption that the HVAC is in cooling mode
whenever the measured temperature is higher than the setpoint, and in heating
mode otherwise. As can already be seen in
Figure~\ref{fig:Polydome_exp7_settemp}, this is a very strong assumption that
is not necessarily always correct. It works well when the measurements are very
different from the setpoint, as is the case in the first part of the
experiment, but it breaks down in the second part, where the setpoint
temperature remains fixed and it is purely the HVAC's job to regulate the
temperature.
\begin{figure}[ht]
\centering
\includegraphics[width = \textwidth]{Plots/CARNOT_comparison_1.pdf}
\includegraphics[width = \textwidth]{Plots/CARNOT_comparison_2.pdf}
\caption{Measured vs CARNOT simulated temperature for the Experimental
Datasets}
\label{fig:CARNOT_simulation_validation}
\end{figure}
The results of the seven simulations are presented in
Figure~\ref{fig:CARNOT_simulation_validation}. Overall, the simulated
temperature follows the same behaviour as the real \pdome\ measurements. A
more detailed inspection shows that for most of the experiments the simulated
temperature is much more volatile than the true measurements. This could be
due to an overestimated value of the Air Exchange Rate, an underestimated
amount of furniture in the building, or, more probably, a misclassification of
the HVAC's heating/cooling mode. Of note is the large difference in behaviour
for Experiments 5 and 6. For these experiments, the values of the electrical
power consumption differ greatly in shape from the ones present in the other
datasets, which could indicate erroneous measurements or some other underlying
problem with the data.
Finally, it is possible to conclude that the CARNOT building behaves
comparably to the real \pdome{}, even if it does not perfectly reproduce its
behaviour. The differences could come from multiple factors: missing
information that had to be inferred and/or approximated, such as the Air
Exchange Rate, the heat provided/extracted, the amount of furniture in the
building, and the overall shape and size of the building, as well as possible
errors in the experimental data used for validation. A more detailed analysis
of the building parameters would be required in order to find and eliminate
these discrepancies.
\clearpage

\section{Choice of Hyperparameters}
This section will discuss and try to validate the choice of all the
hyperparameters necessary for the training of a \acrshort{gp} model to capture
the CARNOT building's behaviour.
As described in Section~\ref{sec:gp_dynamical_system}, for the purpose of this
project the \acrlong{gp} model will be trained using the \acrshort{narx}
structure.
\subsection{Lengthscales}
\subsection{The Kernel}
\subsection{Loss functions}
The most important metric for measuring the performance of a model is the value
of the loss function, computed on a dataset separate from the one used for
training.
There exist a number of different loss functions, each focusing on different
aspects of a model's performance. A selection of loss functions used in
identification of Gaussian Process models is presented
below~\cite{kocijanModellingControlDynamic2016}.
The \acrfull{rmse} is a very commonly used performance measure. As the name
suggests, it computes the root of the mean squared error:
\begin{equation}\label{eq:rmse}
\text{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^N \left(y_i -
E(\hat{y}_i)\right)^{2}}
\end{equation}
This performance metric is very useful when training a model whose goal is
solely to minimize the difference between the measured values and the ones
predicted by the model.
A variant of the \acrshort{mse} is the \acrfull{smse}, which normalizes the
\acrlong{mse} by the variance of the output values of the validation dataset.
\begin{equation}\label{eq:smse}
\text{SMSE} = \frac{1}{N}\frac{\sum_{i=1}^N \left(y_i -
E(\hat{y}_i)\right)^{2}}{\sigma_y^2}
\end{equation}
While the \acrshort{rmse} and the \acrshort{smse} are very good at ensuring the
predicted mean value of the Gaussian Process is close to the measured values of
the validation dataset, the confidence of the Gaussian Process prediction is
completely ignored. In this case, two models predicting the same mean values
but having very different confidence intervals would be equivalent according
to these performance metrics.
The \acrfull{lpd} is a performance metric which takes into account not only
the mean value of the GP prediction, but the entire distribution:
\begin{equation}
\text{LPD} = \frac{1}{2} \ln{\left(2\pi\right)} + \frac{1}{2N}
\sum_{i=1}^N\left(\ln{\left(\sigma_i^2\right)} + \frac{\left(y_i -
E(\hat{y}_i)\right)^{2}}{\sigma_i^2}\right)
\end{equation}
where $\sigma_i^2$ is the model's output variance at the \textit{i}-th step.
The \acrshort{lpd} scales the error of the mean value prediction $\left(y_i -
E(\hat{y}_i)\right)^{2}$ by the variance $\sigma_i^2$. This means that
overconfident models get penalized more than conservative ones for the same
mean prediction error, leading to models that better represent the real
system.
The \acrfull{msll} is obtained by subtracting from the model \acrshort{lpd}
the loss of a trivial model, which predicts using a Gaussian with the mean
$E(\boldsymbol{y})$ and variance $\sigma_y^2$ of the measured data, and taking
the mean of the obtained result:
\begin{equation}
\text{MSLL} = \frac{1}{2N}\sum_{i=1}^N\left[
\ln{\left(\sigma_i^2\right) + \frac{\left(y_i -
E\left(\hat{y}_i\right)\right)^2}{\sigma_i^2}}
\right] - \frac{1}{2N}\sum_{i=1}^N\left[
\ln{\left(\sigma_y^2\right) + \frac{\left(y_i -
E\left(\boldsymbol{y}\right)\right)^2}{\sigma_y^2}}
\right]
\end{equation}
The \acrshort{msll} is approximately zero for simple models and negative for
better ones.
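The four metrics above translate directly into NumPy. The functions below are
a sketch following the equations, with \texttt{mu} and \texttt{var} denoting
the predictive mean $E(\hat{y}_i)$ and variance $\sigma_i^2$.

```python
import numpy as np

def rmse(y, mu):
    """Root mean-squared error of the predictive mean."""
    return np.sqrt(np.mean((y - mu) ** 2))

def smse(y, mu):
    """Mean-squared error normalised by the output variance."""
    return np.mean((y - mu) ** 2) / np.var(y)

def lpd(y, mu, var):
    """Log predictive density loss, averaged over the dataset."""
    return 0.5 * np.log(2 * np.pi) + 0.5 * np.mean(
        np.log(var) + (y - mu) ** 2 / var)

def msll(y, mu, var):
    """LPD of the model minus the LPD of a trivial Gaussian predictor."""
    model = np.log(var) + (y - mu) ** 2 / var
    trivial = np.log(np.var(y)) + (y - np.mean(y)) ** 2 / np.var(y)
    return 0.5 * np.mean(model - trivial)
```

As expected, for the same mean error an overconfident model (small
\texttt{var}) is penalised more by the \acrshort{lpd} than a conservative one.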
% TODO: [Hyperparameters] Explain loss table, difference in lags, etc.
\begin{table}[ht]
%\vspace{-8pt}
\centering
\begin{tabular}{||c c c|c c c c||}
\hline
\multicolumn{3}{||c|}{Lags} & \multicolumn{4}{c||}{Loss functions}\\
w & u & y & RMSE & SMSE & MSLL & LPD\\
\hline \hline
1 & 1 & 1 & 0.3464 & 0.36394 & 20.74 & 21.70 \\
1 & 1 & 2 & 0.1415 & 0.06179 & -9.62 & -8.67 \\
1 & 1 & 3 & 0.0588 & 0.01066 & -8.99 & -8.03 \\
1 & 2 & 1 & 0.0076 & 0.00017 & 71.83 & 72.79 \\
1 & 2 & 3 & \textbf{0.0041} & \textbf{0.00005} & 31.25 & 32.21 \\
2 & 1 & 2 & 0.1445 & 0.06682 & -9.57 & -8.61 \\
2 & 1 & 3 & 0.0797 & 0.02033 & -10.94 & -9.99 \\
3 & 1 & 3 & 0.0830 & 0.02219 & \textbf{-11.48} & \textbf{-10.53} \\
3 & 2 & 2 & 0.0079 & 0.00019 & 58.30 & 59.26 \\
\hline
\end{tabular}
\caption{GP Loss function values for different autoregressive lags}
\label{tab:GP_loss_functions}
\end{table}
\begin{table}[ht]
%\vspace{-8pt}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{||c c c|c|c c c c c c c c c c c||}
\hline
\multicolumn{3}{||c|}{Lags} & Variance &\multicolumn{11}{c||}{Kernel
lengthscales relative importance} \\
w & u & y & $\sigma^2$ &$\lambda_{w1,1}$ & $\lambda_{w1,2}$ &
$\lambda_{w1,3}$ & $\lambda_{w2,1}$ & $\lambda_{w2,2}$ &
$\lambda_{w2,3}$ & $\lambda_{u1,1}$ & $\lambda_{u1,2}$ &
$\lambda_{y1,1}$ & $\lambda_{y1,2}$ & $\lambda_{y1,3}$\\
\hline \hline
1 & 1 & 1 & 0.11 & 0.721 & & & 2.633 & & & 0.569 & & 2.645 & & \\
1 & 1 & 2 & 22.68 & 0.222 & & & 0.751 & & & 0.134 & & 3.154 & 3.073 & \\
1 & 1 & 3 & 0.29 & 0.294 & & & 1.303 & & & 0.356 & & 2.352 & 1.361 & 2.045 \\
1 & 2 & 1 & 7.55 & 0.157 & & & 0.779 & & & 0.180 & 0.188 & 0.538 & & \\
1 & 2 & 3 & 22925.40 & 0.018 & & & 0.053 & & & 0.080 & 0.393 & 0.665 & 0.668 & 0.018 \\
2 & 1 & 2 & 31.53 & 0.010 & 0.219 & & 0.070 & 0.719 & & 0.123 & & 3.125 & 3.044 & \\
2 & 1 & 3 & 0.44 & 0.007 & 0.251 & & 0.279 & 1.229 & & 0.319 & & 2.705 & 1.120 & 2.510 \\
3 & 1 & 3 & 0.56 & 0.046 & 0.064 & 0.243 & 0.288 & 1.151 & 0.233 & 0.302 & & 2.809 & 1.086 & 2.689 \\
3 & 2 & 2 & 1.65 & 0.512 & 0.074 & 0.201 & 0.161 & 1.225 & 0.141 & 0.231 & 0.331 & 0.684 & 0.064 & \\
\hline
\end{tabular}%
}
\caption{GP hyperparameter values for different autoregressive lags}
\label{tab:GP_hyperparameters}
\end{table}
\begin{table}[ht]
%\vspace{-8pt}
\centering
\begin{tabular}{||c c c|c c c c||}
\hline
\multicolumn{3}{||c|}{Lags} & \multicolumn{4}{c||}{Loss functions}\\
w & u & y & RMSE & SMSE & MSLL & LPD\\
\hline \hline
1 & 1 & 1 & 0.3253 & 0.3203 & 228.0278 & 228.9843 \\
1 & 1 & 2 & 0.2507 & 0.1903 & 175.5525 & 176.5075 \\
1 & 1 & 3 & 0.1983 & 0.1192 & 99.7735 & 100.7268 \\
1 & 2 & 1 & 0.0187 & 0.0012 & -9.5386 & -8.5836 \\
1 & 2 & 3 & \textbf{0.0182} & \textbf{0.0011} & \textbf{-10.2739} &
\textbf{-9.3206} \\
2 & 1 & 2 & 0.2493 & 0.1884 & 165.0734 & 166.0284 \\
2 & 1 & 3 & 0.1989 & 0.1200 & 103.6753 & 104.6287 \\
3 & 1 & 3 & 0.2001 & 0.1214 & 104.4147 & 105.3681 \\
3 & 2 & 2 & 0.0206 & 0.0014 & -9.9360 & -8.9826 \\
\hline
\end{tabular}
\caption{SVGP Loss function values for different autoregressive lags}
\label{tab:SVGP_loss_functions}
\end{table}
\begin{table}[ht]
%\vspace{-8pt}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{||c c c|c|c c c c c c c c c c c||}
\hline
\multicolumn{3}{||c|}{Lags} & Variance &\multicolumn{11}{c||}{Kernel
lengthscales relative importance} \\
w & u & y & $\sigma^2$ &$\lambda_{w1,1}$ & $\lambda_{w1,2}$ &
$\lambda_{w1,3}$ & $\lambda_{w2,1}$ & $\lambda_{w2,2}$ &
$\lambda_{w2,3}$ & $\lambda_{u1,1}$ & $\lambda_{u1,2}$ &
$\lambda_{y1,1}$ & $\lambda_{y1,2}$ & $\lambda_{y1,3}$\\
\hline \hline
1 & 1 & 1 & 0.2970 & 0.415 & & & 0.748 & & & 0.675 & & 0.680 & & \\
1 & 1 & 2 & 0.2717 & 0.430 & & & 0.640 & & & 0.687 & & 0.559 & 0.584 & \\
1 & 1 & 3 & 0.2454 & 0.455 & & & 0.589 & & & 0.671 & & 0.522 & 0.512 & 0.529 \\
1 & 2 & 1 & 0.2593 & 0.310 & & & 0.344 & & & 0.534 & 0.509 & 0.597 & & \\
1 & 2 & 3 & 0.2139 & 0.330 & & & 0.368 & & & 0.537 & 0.447 & 0.563 & 0.410 & 0.363 \\
2 & 1 & 2 & 0.2108 & 0.421 & 0.414 & & 0.519 & 0.559 & & 0.680 & & 0.525 & 0.568 & \\
2 & 1 & 3 & 0.1795 & 0.456 & 0.390 & & 0.503 & 0.519 & & 0.666 & & 0.508 & 0.496 & 0.516 \\
3 & 1 & 3 & 0.1322 & 0.432 & 0.370 & 0.389 & 0.463 & 0.484 & 0.491 & 0.666 & & 0.511 & 0.501 & 0.526 \\
3 & 2 & 2 & 0.1228 & 0.329 & 0.317 & 0.325 & 0.334 & 0.337 & 0.331 &
0.527 & 0.441 & 0.579 & 0.435 & \\
\hline
\end{tabular}%
}
\caption{SVGP hyperparameter values for different autoregressive lags}
\label{tab:SVGP_hyperparameters}
\end{table}
\clearpage
\subsection{Validation of hyperparameters}
% TODO: [Hyperparameters] Validation of hyperparameters
\subsubsection{Conventional Gaussian Process}
\begin{figure}[ht]
\centering
\includegraphics[width = \textwidth]{Plots/GP_113_training_performance.pdf}
\caption{GP model performance on the training dataset}
\label{fig:GP_train_validation}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width = \textwidth]{Plots/GP_113_test_performance.pdf}
\caption{GP model performance on the test dataset}
\label{fig:GP_test_validation}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width =
\textwidth]{Plots/GP_113_-1pts_test_prediction_20_steps.png}
\caption{GP model 20-step ahead prediction on the test dataset}
\label{fig:GP_multistep_validation}
\end{figure}
\clearpage
\subsubsection{Sparse and Variational Gaussian Process}
\begin{figure}[ht]
\centering
\includegraphics[width = \textwidth]{Plots/SVGP_123_training_performance.pdf}
\caption{SVGP model performance on the training dataset}
\label{fig:SVGP_train_validation}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width = \textwidth]{Plots/SVGP_123_test_performance.pdf}
\caption{SVGP model performance on the test dataset}
\label{fig:SVGP_test_validation}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width =
\textwidth]{Plots/SVGP_123_test_prediction_20_steps.png}
\caption{SVGP model 20-step ahead prediction on the test dataset}
\label{fig:SVGP_multistep_validation}
\end{figure}
\clearpage

\section{The MPC Problem}
The Optimal Control Problem to be solved was chosen in such a way as to make
the analysis of the models' performance more straightforward. The objective is
to track a defined reference temperature as closely as possible, while
ensuring the heat input stays within the HVAC capacity. The
\textit{zero-variance} method is used for multi-step prediction with an
existing \acrshort{gp}. This ignores the propagation of uncertainty over
multi-step-ahead predictions but, despite its simplicity, this method has been
proven to work
well~\cite{kocijanModellingControlDynamic2016,jainLearningControlUsing2018,
pleweSupervisoryModelPredictive2020}.
The optimization problem is therefore defined as follows:
\begin{subequations}\label{eq:optimal_control_problem}
\begin{align}
& \text{minimize}
& & \sum_{i=0}^{N-1} (\bar{y}_{t+i} - y_{ref, t})^2 \\
& \text{subject to}
& & \bar{y}_{t+i} = K_*K^{-1}\mathbf{x}_{t+i-1} \\
&&& \mathbf{x}_{t+i-1} = \left[\mathbf{w}_{t+i-1},\quad
\mathbf{u}_{t+i-1},\quad \mathbf{y}_{t+i-1}\right]^T \\
\label{eq:components}
&&& u_{t+i} \in \mathcal{U}
\end{align}
\end{subequations}
where $y_{ref, t}$ is the reference temperature at time $t$ and
$\mathbf{x}_{t}$ is the GP input vector at time $t$, composed of the exogenous
autoregressive inputs $\mathbf{w}_{t}$, the autoregressive controlled inputs
$\mathbf{u}_{t}$ and the autoregressive outputs $\mathbf{y}_{t}$.
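The zero-variance multi-step prediction used inside this problem can be
sketched as follows. Here \texttt{predict\_mean} is an assumed interface
standing in for the \acrshort{gp} posterior mean $K_*K^{-1}(\cdot)$, not the
actual implementation.

```python
import numpy as np

def multistep_zero_variance(predict_mean, w_seq, u_seq, y_lags_init):
    """Propagate a NARX GP over N steps using only its posterior mean.

    Each predicted output is fed back as an autoregressive input; the
    predictive variance is ignored (the zero-variance simplification).
    `predict_mean` is an assumed callable mapping a regressor
    [w, u, y-lags] to the one-step-ahead mean prediction.
    """
    y_lags = list(y_lags_init)
    predictions = []
    for w_t, u_t in zip(w_seq, u_seq):
        x_t = np.concatenate([w_t, u_t, y_lags])
        y_next = predict_mean(x_t)
        predictions.append(y_next)
        y_lags = [y_next] + y_lags[:-1]  # shift the output lags
    return np.array(predictions)
```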
\subsection{Temperature reference}
The temperature reference for the controller has been taken as the mean value of
the SIA~180:2014~\cite{sia180:2014ProtectionThermiqueProtection2014} temperature
norm. It imposes a range of temperatures that are allowed for residential
buildings based on the rolling 48h average outside temperature.
\begin{figure}[ht]
\centering
\includegraphics[width = \textwidth]{Images/sia_180_2014.png}
\caption{The SIA 180:2014 norm for residential building temperatures}
\label{fig:sia_temperature_norm}
\end{figure}
\clearpage

\section{Implementation}
\subsection{Simulink Model}
% TODO: Cite CARNOT
% TODO: [Implementation] Reference implementation details for CARNOT and WDB
\subsection{Gaussian Processes}
% TODO: [Implementation] Cite Tensorflow
% TODO: [Implementation] Cite GPflow
\subsection{Classical Gaussian Process training}
\subsection{Sparse and Variational Gaussian Process training}
\subsection{Tensorflow and GPflow}
\subsection{Optimal Control Problem}
% TODO: [Implementation] Cite CasADi
% TODO: [Implementation] Cite HSL solvers for using MA27
\subsection{Sparse Implementation of the Optimization Problem}
The optimization problem as presented in
Equation~\ref{eq:optimal_control_problem} becomes very nonlinear quite fast.
Due to the autoregressive structure of the \acrshort{gp}, the predicted
temperature at time $t$ is passed as an input to the model at time $t+1$. A
simple recursive implementation of the optimization problem therefore becomes
intractable after only 3--4 prediction steps.
In order to solve this problem, a new OCP is introduced. It has a much sparser
structure, in exchange for a larger number of variables. This turns out to be
much faster to solve than the original problem.
Let $w_l$, $u_l$, and $y_l$ be the lengths of the state vector components
$\mathbf{w}$, $\mathbf{u}$, $\mathbf{y}$ (cf. Equation~\ref{eq:components}).
\begin{subequations}\label{eq:sparse_optimal_control_problem}
\begin{align}
& \text{minimize}
& & \sum_{i=2}^{N + 1} \left(X[i, w_l + u_l + 1] - y_{ref, t}\right)^2 \\
& \text{subject to}
& & X[i+1, w_l + u_l + 1] = K_*K^{-1}X[i, :] \quad \text{for} \quad
i\in[1, N]\\
&&& X[i, w_l + u_l + 2: ] = X[i, w_l+ u_l + 1: w_l + u_l + y_l - 1]\\
&&& X[i, 1:w_l] = W[i, :] \\
&&& X[i+1, w_l + 2: w_l + u_l] = X[i, w_l + 1: w_l + u_l - 1] \\
&&& X[:, w_l + 1] \in \mathcal{U}
\end{align}
\end{subequations}
where $X$ is the matrix of all the system states and $W$ is the matrix of the
disturbances.
\subsection{Python server and controller objects}

\subsection{Conventional Gaussian Processes}
\begin{figure}[ht]
\centering
\includegraphics[width =
\textwidth]{Plots/4_GP_480pts_12_averageYear_fullyear.pdf}
\caption{GP full year simulation}
\label{fig:GP_fullyear_simulation}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width =
\textwidth]{Plots/4_GP_480pts_12_averageYear_first_model_performance.pdf}
\caption{GP first model performance}
\label{fig:GP_first_model_performance}
\end{figure}
\clearpage
\subsection{Adaptive scheme with SVGP}
\subsubsection{All accumulated data}
\begin{figure}[ht]
\centering
\includegraphics[width =
\textwidth]{Plots/1_SVGP_480pts_inf_window_12_averageYear_fullyear.pdf}
\caption{SVGP full year simulation}
\label{fig:SVGP_fullyear_simulation}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width =
\textwidth]{Plots/1_SVGP_480pts_inf_window_12_averageYear_first_model_performance.pdf}
\caption{SVGP first model performance}
\label{fig:SVGP_first_model_performance}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width =
\textwidth]{Plots/1_SVGP_480pts_inf_window_12_averageYear_later_model_performance.pdf}
\caption{SVGP later model performance}
\label{fig:SVGP_later_model_performance}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width =
\textwidth]{Plots/1_SVGP_480pts_inf_window_12_averageYear_last_model_performance.pdf}
\caption{SVGP last model performance}
\label{fig:SVGP_last_model_performance}
\end{figure}
\clearpage
\subsubsection{480-point moving window}
\begin{figure}[ht]
\centering
\includegraphics[width =
\textwidth]{Plots/5_SVGP_480pts_480pts_window_12_averageYear_fullyear.pdf}
\caption{SVGP full year simulation}
\label{fig:SVGP_480window_fullyear_simulation}
\end{figure}
\clearpage
\subsubsection{96-point initial dataset}
\begin{figure}[ht]
\centering
\includegraphics[width =
\textwidth]{Plots/6_SVGP_96pts_inf_window_12_averageYear_fullyear.pdf}
\caption{SVGP full year simulation}
\label{fig:SVGP_96pts_fullyear_simulation}
\end{figure}
\clearpage
\subsection{Qualitative analysis}
\subsection{Quantitative analysis}
\clearpage

% Glossary entries
% Acronyms
\newacronym{dni}{DNI}{Direct Normal Irradiance}
\newacronym{dhi}{DHI}{Diffuse Horizontal Irradiance}
\newacronym{ghi}{GHI}{Global Horizontal Irradiance}
\newacronym{df}{DF}{Diffuse Fraction}
\newacronym{aoi}{AOI}{Angle of incidence}
\newacronym{poa}{POA}{In-plane irradiance}
\newacronym{mse}{MSE}{Mean-Squared Error}
\newacronym{rmse}{RMSE}{Root Mean-Squared Error}
\newacronym{smse}{SMSE}{Standardised Mean-Squared Error}
\newacronym{lpd}{LPD}{Log Predictive Density Error}
\newacronym{msll}{MSLL}{Mean Standardised Log Loss}
\newacronym{gp}{GP}{Gaussian Process}
\newacronym{gpr}{GPR}{Gaussian Process Regression}
\newacronym{svgp}{SVGP}{Sparse and Variational Gaussian Process}
\newacronym{elbo}{ELBO}{Evidence Lower Bound}
\newacronym{kl}{KL}{Kullback-Leibler}
\newacronym{se}{SE}{Squared Exponential}
\newacronym{rq}{RQ}{Rational Quadratic}
\newacronym{ard}{ARD}{Automatic Relevance Determination}
\newacronym{nfir}{NFIR}{Nonlinear Finite Impulse Response}
\newacronym{narx}{NARX}{Nonlinear autoregressive model with exogenous input}
\newacronym{noe}{NOE}{Nonlinear output error}
\newacronym{narmax}{NARMAX}{Nonlinear autoregressive and moving average model
with exogenous input}

\usepackage[T1]{fontenc}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{libertinus} % Use font of arbitrary sizes
% (removes bibliography font warning)
\usepackage{csquotes} % Removes a compilation warning
\usepackage{changepage}
\usepackage{lipsum}
\usepackage[acronym]{glossaries}
\makenoidxglossaries
\loadglsentries[acronym]{glossaries.tex}
%SI Package
\usepackage[binary-units=true]{siunitx}
\usepackage{mathtools}
\usepackage{textcomp} %defines \perthousand and \micro for gensymb
\usepackage{gensymb} %math in text
\DeclarePairedDelimiter{\norm}{\lVert}{\rVert}
\usepackage[unicode=true,pdfusetitle,
bookmarks=true,
bookmarksnumbered=false,
bookmarksopen=false,
breaklinks=false,
pdfborder={0 0 1},
backref=false,
colorlinks=false]{hyperref}
%\usepackage[colorinlistoftodos]{todonotes}
%\usepackage[colorlinks=true, allcolors=black]{hyperref}
\setlength\parindent{0pt}
% Define new user commands
\newcommand{\pdome}{Polyd\^ome}
\DeclarePairedDelimiter{\norm}{\lVert}{\rVert}
\begin{document}
\selectlanguage{english}
\tableofcontents
\clearpage
\printnoidxglossary[type=\acronymtype]
\clearpage
\input{10_Introduction.tex}
\input{20_Previous_Research.tex}
\input{30_Gaussian_Processes_Background.tex}
\input{80_Results.tex}
\input{90_Further_Research.tex}
\input{95_Conclusion.tex}
\printbibliography
\end{document}

@misc{aermecRoofTopManuelSelection,
title = {Roof-{{Top Manuel}} de Selection et {{Installation}}},
author = {{AERMEC}},
file = {/home/radu/Zotero/storage/8H7BWC7Z/manuel technique RTY.PDF}
}
@article{anderssonCasADiSoftwareFramework2019,
title = {{{CasADi}}: A Software Framework for Nonlinear Optimization and Optimal Control},
shorttitle = {{{CasADi}}},
author = {Andersson, Joel A. E. and Gillis, Joris and Horn, Greg and Rawlings, James B. and Diehl, Moritz},
date = {2019-03},
journaltitle = {Mathematical Programming Computation},
shortjournal = {Math. Prog. Comp.},
volume = {11},
pages = {1--36},
issn = {1867-2949, 1867-2957},
doi = {10.1007/s12532-018-0139-4},
url = {http://link.springer.com/10.1007/s12532-018-0139-4},
urldate = {2021-06-03},
file = {/home/radu/Zotero/storage/4HZ595JR/Andersson et al. - 2019 - CasADi a software framework for nonlinear optimiz.pdf},
langid = {english},
number = {1}
}
@article{arroyoPythonBasedToolboxModel2018,
title = {A {{Python}}-{{Based Toolbox}} for {{Model Predictive Control Applied}} to {{Buildings}}},
author = {Arroyo, Javier},
date = {2018},
pages = {14},
abstract = {The use of Model Predictive Control (MPC) in Building Management Systems (BMS) has proven to outperform the traditional Rule-Based Controllers (RBC). These optimal controllers are able to minimize the energy use within building, by taking into account the weather forecast and occupancy profiles, while guaranteeing thermal comfort in the building. To this end, they anticipate the dynamic behaviour based on a mathematical model of the system. However, these MPC strategies are still not widely used in practice because a substantial engineering effort is needed to identify a tailored model for each building and Heat Ventilation and Air Conditioning (HVAC) system.},
file = {/home/radu/Zotero/storage/UC4XY3WZ/Arroyo - 2018 - A Python-Based Toolbox for Model Predictive Contro.pdf},
langid = {english}
}
@online{BuildingsHeatTransferData,
title = {Buildings.{{HeatTransfer}}.{{Data}}.{{Solids}}},
url = {https://simulationresearch.lbl.gov/modelica/releases/latest/help/Buildings_HeatTransfer_Data_Solids.html#Buildings.HeatTransfer.Data.Solids},
urldate = {2021-06-04},
file = {/home/radu/Zotero/storage/UWE74DAN/Buildings_HeatTransfer_Data_Solids.html}
}
@online{CARNOTManual,
title = {{{CARNOT}} Manual},
url = {https://www.fh-aachen.de/fileadmin/ins/ins_solar_institut/Carnot_Downloads/Manual.html},
urldate = {2021-06-11},
file = {/home/radu/Zotero/storage/JHEQ88EG/Manual.html}
}
@online{doubleglazingDoubleGlazingValue,
title = {Double {{Glazing U Value Explained}}},
author = {Doubleglazing, Admin},
url = {https://www.doubleglazing.com/windows/double-glazing-materials/double-glazing-u-value-explained/},
urldate = {2021-06-08},
abstract = {What is the U value? Double glazing can be measured on its enery efficiency whilst comparing it to other windows, by using a calculation system known as the U value (or U factor). The U value can be adopted for any kind of home construction and measures how effective that component is at retaining},
file = {/home/radu/Zotero/storage/ISYJ8QVT/double-glazing-u-value-explained.html},
langid = {british},
organization = {{Double Glazing.com}}
}
@online{ElevationFinder,
title = {Elevation {{Finder}}},
url = {https://www.freemaptools.com/elevation-finder.htm},
urldate = {2021-06-11},
file = {/home/radu/Zotero/storage/T9PQB6EC/elevation-finder.html}
}
@article{erbsEstimationDiffuseRadiation1982,
title = {Estimation of the Diffuse Radiation Fraction for Hourly, Daily and Monthly-Average Global Radiation},
author = {Erbs, D. G. and Klein, S. A. and Duffie, J. A.},
date = {1982-01-01},
journaltitle = {Solar Energy},
shortjournal = {Solar Energy},
volume = {28},
pages = {293--302},
issn = {0038-092X},
doi = {10.1016/0038-092X(82)90302-4},
url = {https://www.sciencedirect.com/science/article/pii/0038092X82903024},
urldate = {2021-06-11},
abstract = {Hourly pyrheliometer and pyranometer data from four U.S. locations are used to establish a relationship between the hourly diffuse fraction and the hourly clearness index kT. This relationship is compared to the relationship established by Orgill and Hollands and to a set of data from Highett, Australia, and agreement is within a few percent in both cases. The transient simulation program TRNSYS is used to calculate the annual performance of solar energy systems using several correlations. For the systems investigated, the effect of simulating the random distribution of the hourly diffuse fraction is negligible. A seasonally dependent daily diffuse correlation is developed from the data, and this daily relationship is used to derive a correlation for the monthly-average diffuse fraction.},
file = {/home/radu/Zotero/storage/BN2KS23L/0038092X82903024.html},
langid = {english},
number = {4}
}
@article{f.holmgrenPvlibPythonPython2018,
title = {Pvlib Python: A Python Package for Modeling Solar Energy Systems},
shorttitle = {Pvlib Python},
author = {F. Holmgren, William and W. Hansen, Clifford and A. Mikofski, Mark},
date = {2018-09-07},
journaltitle = {Journal of Open Source Software},
shortjournal = {JOSS},
volume = {3},
pages = {884},
issn = {2475-9066},
doi = {10.21105/joss.00884},
url = {http://joss.theoj.org/papers/10.21105/joss.00884},
urldate = {2021-06-11},
file = {/home/radu/Zotero/storage/DGWSAEYK/F. Holmgren et al. - 2018 - pvlib python a python package for modeling solar .pdf},
number = {29}
}
@article{fabiettiMultitimeScaleCoordination2018,
title = {Multi-Time Scale Coordination of Complementary Resources for the Provision of Ancillary Services},
author = {Fabietti, Luca and Qureshi, Faran A. and Gorecki, Tomasz T. and Salzmann, Christophe and Jones, Colin N.},
date = {2018-11},
journaltitle = {Applied Energy},
shortjournal = {Applied Energy},
volume = {229},
pages = {1164--1180},
issn = {03062619},
doi = {10.1016/j.apenergy.2018.08.045},
url = {https://linkinghub.elsevier.com/retrieve/pii/S0306261918312005},
urldate = {2019-06-09},
abstract = {This paper presents a predictive control scheme for coordinating a set of heterogeneous and complementary resources at different timescales for the provision of ancillary services. In particular, we combine building thermodynamics (slow), and energy storage systems (fast resources) to augment the flexibility that can be provided to the grid compared to the flexibility that any of these resources can provide individually. A multilevel control scheme based on data-based robust optimization methods is developed that enables heterogeneous resources at different time scales (slow and fast) to provide fast regulation services, especially a secondary frequency control service. A data-based predictor is developed to forecast the future regulation signal and is used to improve the performance of the controller in real-time operation. The proposed control method is used to conduct experiments, for nine consecutive days, demonstrating the provision of secondary frequency control fully complying to the Swiss regulations, using a controllable building cooling system on the EPFL campus and an emulated grid-connected energy storage system. The experimental results show that optimally combining such slow and fast resources can significantly augment the flexibility that can be provided to the grid. To the best of authors knowledge, this work is the first experimental demonstration of coordinating heterogeneous demandresponse to provide secondary frequency control service.},
file = {/home/radu/Zotero/storage/HX6FBTNX/Fabietti et al. - 2018 - Multi-time scale coordination of complementary res.pdf},
langid = {english}
}
@article{garrido-merchanDealingCategoricalIntegervalued2020,
title = {Dealing with {{Categorical}} and {{Integer}}-Valued {{Variables}} in {{Bayesian Optimization}} with {{Gaussian Processes}}},
author = {Garrido-Merchán, Eduardo C. and Hernández-Lobato, Daniel},
date = {2020-03},
journaltitle = {Neurocomputing},
shortjournal = {Neurocomputing},
volume = {380},
pages = {20--35},
issn = {09252312},
doi = {10.1016/j.neucom.2019.11.004},
url = {http://arxiv.org/abs/1805.03463},
urldate = {2021-06-03},
abstract = {Bayesian Optimization (BO) is useful for optimizing functions that are expensive to evaluate, lack an analytical expression and whose evaluations can be contaminated by noise. These methods rely on a probabilistic model of the objective function, typically a Gaussian process (GP), upon which an acquisition function is built. The acquisition function guides the optimization process and measures the expected utility of performing an evaluation of the objective at a new point. GPs assume continuous input variables. When this is not the case, for example when some of the input variables take categorical or integer values, one has to introduce extra approximations. Consider a suggested input location taking values in the real line. Before doing the evaluation of the objective, a common approach is to use a one hot encoding approximation for categorical variables, or to round to the closest integer, in the case of integer-valued variables. We show that this can lead to optimization problems and describe a more principled approach to account for input variables that are categorical or integer-valued. We illustrate in both synthetic and a real experiments the utility of our approach, which significantly improves the results of standard BO methods using Gaussian processes on problems with categorical or integer-valued variables.},
archiveprefix = {arXiv},
eprint = {1805.03463},
eprinttype = {arxiv},
file = {/home/radu/Zotero/storage/UQSNGFSC/Garrido-Merchán and Hernández-Lobato - 2020 - Dealing with Categorical and Integer-valued Variab.pdf},
keywords = {Computer Science - Machine Learning,Statistics - Machine Learning},
langid = {english}
}
@online{glassforeuropeMinimumPerformanceRequirements2018,
title = {Minimum {{Performance Requirements}} for {{Windows}}},
author = {{Glass For Europe}},
date = {2018-12-19T09:45:24+00:00},
url = {https://glassforeurope.com/minimum-performance-requirements-for-windows/},
urldate = {2021-06-08},
abstract = {Minimum performance requirements~for window replacement in the~residential sector ~ As required by the Energy Performance of Buildings Directive, EU Member States have to set cost-optimal minimum energy performance requirements for the replacement of building elements such as windows. The map and the table below illustrate how EU Member States have implemented this 2012 EPBD requirement},
file = {/home/radu/Zotero/storage/XK6LLKBM/minimum-performance-requirements-for-windows.html},
langid = {american},
organization = {{Glass for Europe}}
}
@online{GoogleMaps,
title = {Google {{Maps}}},
url = {https://www.google.com/maps/place/46%C2%B031'17.4%22N+6%C2%B034'08.3%22E/@46.5214889,6.5688311,21z/data=!4m6!3m5!1s0x478c3101c29a8bf5:0x628d91034302f8be!7e2!8m2!3d46.5214889!4d6.5689679},
urldate = {2021-06-04},
abstract = {Find local businesses, view maps and get driving directions in Google Maps.},
file = {/home/radu/Zotero/storage/9M2GL5H6/data=!4m6!3m5!1s0x478c3101c29a8bf50x628d91034302f8be!7e2!8m2!3d46.5214889!4d6.html},
langid = {english},
organization = {{Google Maps}}
}
@article{GuideEnergyEfficientWindows,
title = {Guide to {{Energy}}-{{Efficient Windows}}},
journaltitle = {US Department of Energy},
pages = {2},
file = {/home/radu/Zotero/storage/UEKFZDDC/Guide to Energy-Efficient Windows.pdf},
langid = {english}
}
@inproceedings{hensman2014scalable,
title = {Scalable Variational Gaussian Process Classification},
booktitle = {Proceedings of {{AISTATS}}},
author = {Hensman, James and Matthews, Alexander G. de G. and Ghahramani, Zoubin},
date = {2015}
}
@online{hensmanGaussianProcessesBig2013,
title = {Gaussian {{Processes}} for {{Big Data}}},
author = {Hensman, James and Fusi, Nicolo and Lawrence, Neil D.},
date = {2013-09-26},
url = {http://arxiv.org/abs/1309.6835},
urldate = {2021-06-15},
abstract = {We introduce stochastic variational inference for Gaussian process models. This enables the application of Gaussian process (GP) models to data sets containing millions of data points. We show how GPs can be variationally decomposed to depend on a set of globally relevant inducing variables which factorize the model in the necessary manner to perform variational inference. Our approach is readily extended to models with non-Gaussian likelihoods and latent variable models based around Gaussian processes. We demonstrate the approach on a simple toy problem and two real world data sets.},
archiveprefix = {arXiv},
eprint = {1309.6835},
eprinttype = {arxiv},
file = {/home/radu/Zotero/storage/HALCX455/Hensman et al. - 2013 - Gaussian Processes for Big Data.pdf;/home/radu/Zotero/storage/KNNUVW7T/1309.html},
keywords = {Computer Science - Machine Learning,Statistics - Machine Learning},
primaryclass = {cs, stat}
}
@software{holmgrenPvlibPvlibpythonV02021,
title = {Pvlib/Pvlib-Python: V0.8.1},
shorttitle = {Pvlib/Pvlib-Python},
author = {Holmgren, Will and Calama-Consulting and Hansen, Cliff and Mikofski, Mark and Anderson, Kevin and Lorenzo, Tony and Krien, Uwe and Bmu and Stark, Cameron and DaCoEx and Driesse, Anton and Konstant\_t and Mayudong and Peque, Miguel Sánchez De León and Heliolytics and Miller, Ed and Anoma, Marc A. and Guo, Veronica and Boeman, Leland and Jforbess and Lunel, Tanguy and Morgan, Alexander and Stein, Joshua and Leroy, Cedric and Ahan M R and JPalakapillyKWH and Dollinger, Johannes and Anderson, Kevin and MLEEFS and Dowson, Oscar},
date = {2021-01-05},
doi = {10.5281/ZENODO.4417742},
url = {https://zenodo.org/record/4417742},
urldate = {2021-06-11},
abstract = {https://pvlib-python.readthedocs.io/en/v0.8.1/whatsnew.html},
organization = {{Zenodo}},
version = {v0.8.1}
}
@online{HSLCollectionFortran,
title = {{{HSL}}, a Collection of {{Fortran}} Codes for Large-Scale Scientific Computation. {{See}} {{http://www.hsl.rl.ac.uk/}}},
url = {https://www.hsl.rl.ac.uk/},
urldate = {2021-06-03},
file = {/home/radu/Zotero/storage/AU982UBT/www.hsl.rl.ac.uk.html}
}
@inproceedings{jainLearningControlUsing2018,
title = {Learning and {{Control Using Gaussian Processes}}},
booktitle = {2018 {{ACM}}/{{IEEE}} 9th {{International Conference}} on {{Cyber}}-{{Physical Systems}} ({{ICCPS}})},
author = {Jain, Achin and Nghiem, Truong and Morari, Manfred and Mangharam, Rahul},
date = {2018-04},
pages = {140--149},
publisher = {{IEEE}},
location = {{Porto}},
doi = {10.1109/ICCPS.2018.00022},
url = {https://ieeexplore.ieee.org/document/8443729/},
urldate = {2021-06-03},
abstract = {Building physics-based models of complex physical systems like buildings and chemical plants is extremely cost and time prohibitive for applications such as real-time optimal control, production planning and supply chain logistics. Machine learning algorithms can reduce this cost and time complexity, and are, consequently, more scalable for large-scale physical systems. However, there are many practical challenges that must be addressed before employing machine learning for closed-loop control. This paper proposes the use of Gaussian Processes (GP) for learning control-oriented models: (1) We develop methods for the optimal experiment design (OED) of functional tests to learn models of a physical system, subject to stringent operational constraints and limited availability of the system. Using a Bayesian approach with GP, our methods seek to select the most informative data for optimally updating an existing model. (2) We also show that black-box GP models can be used for receding horizon optimal control with probabilistic guarantees on constraint satisfaction through chance constraints. (3) We further propose an online method for continuously improving the GP model in closed-loop with a real-time controller. Our methods are demonstrated and validated in a case study of building energy control and Demand Response.},
eventtitle = {2018 {{ACM}}/{{IEEE}} 9th {{International Conference}} on {{Cyber}}-{{Physical Systems}} ({{ICCPS}})},
file = {/home/radu/Zotero/storage/89X86H5F/Jain et al. - 2018 - Learning and Control Using Gaussian Processes.pdf},
isbn = {978-1-5386-5301-2},
langid = {english}
}
@article{johraNumericalAnalysisImpact2017,
title = {Numerical {{Analysis}} of the {{Impact}} of {{Thermal Inertia}} from the {{Furniture}} / {{Indoor Content}} and {{Phase Change Materials}} on the {{Building Energy Flexibility}}},
author = {Johra, Hicham and Heiselberg, Per Kvols and Dréau, Jérôme Le},
date = {2017},
pages = {8},
abstract = {Many numerical models for building energy simulation assume empty rooms and do not account for the indoor content of occupied buildings. Furnishing elements and indoor items have complicated shapes and are made of various materials. Therefore, most of the people prefer to ignore them. However, this simplification can be problematic for accurate calculation of the transient indoor temperature. This article firstly reviews different solutions to include the indoor content in building models and suggests typical values for its characteristics. Secondly, the paper presents the results of a numerical study investigating the influence of the different types of thermal inertia on buildings energy flexibility. Although the insulation level and thermal mass of a building envelope are the dominant parameters, it appears that indoor content cannot be neglected for lightweight structure building simulations. Finally, it is shown that the integration of phase change materials in wallboards or furniture elements can appreciably improve the energy flexibility of buildings.},
file = {/home/radu/Zotero/storage/9KFSNVNW/Johra et al. - 2017 - Numerical Analysis of the Impact of Thermal Inerti.pdf},
langid = {english}
}
@online{KernelCookbooka,
title = {Kernel {{Cookbook}}},
url = {https://www.cs.toronto.edu/~duvenaud/cookbook/},
urldate = {2021-06-15},
file = {/home/radu/Zotero/storage/LTPBRTNG/cookbook.html}
}
@software{kimballGIMPGNUImage,
title = {{{GIMP}}: {{GNU Image Manipulation Program}}},
author = {Kimball, Spencer and Mattis, Peter and {The GIMP Development Team}},
url = {https://www.gimp.org},
version = {2.10.24}
}
@book{kocijanModellingControlDynamic2016,
title = {Modelling and {{Control}} of {{Dynamic Systems Using Gaussian Process Models}}},
author = {Kocijan, Juš},
date = {2016},
publisher = {{Springer International Publishing}},
location = {{Cham}},
doi = {10.1007/978-3-319-21021-6},
url = {http://link.springer.com/10.1007/978-3-319-21021-6},
urldate = {2019-06-10},
file = {/home/radu/Zotero/storage/YW9S25QI/Kocijan - 2016 - Modelling and Control of Dynamic Systems Using Gau.pdf},
isbn = {978-3-319-21020-9 978-3-319-21021-6},
series = {Advances in {{Industrial Control}}}
}
@article{liuUnderstandingComparingScalable2019,
title = {Understanding and Comparing Scalable {{Gaussian}} Process Regression for Big Data},
author = {Liu, Haitao and Cai, Jianfei and Ong, Yew-Soon and Wang, Yi},
date = {2019-01},
journaltitle = {Knowledge-Based Systems},
shortjournal = {Knowledge-Based Systems},
volume = {164},
pages = {324--335},
issn = {09507051},
doi = {10.1016/j.knosys.2018.11.002},
url = {http://linkinghub.elsevier.com/retrieve/pii/S0950705118305380},
urldate = {2021-06-15},
file = {/home/radu/Zotero/storage/B3I5KT7W/0307ea49eae1cbe24736d3e77e33fcd8.pdf;/home/radu/Zotero/storage/V6BDDI6C/Liu et al. - 2019 - Understanding and comparing scalable Gaussian proc.pdf},
langid = {english}
}
@article{lohmannEinfuehrungSoftwareMATLAB,
title = {Einführung in die Software MATLAB® - Simulink® und die Toolboxen CARNOT und Stateflow® zur Simulation von Gebäude- und Heizungstechnik},
author = {Lohmann, Sandra},
pages = {43},
file = {/home/radu/Zotero/storage/EXHDUWYA/Lohmann - Einführung in die Software MATLAB® - Simulink® und.pdf},
langid = {german}
}
@article{matthewsGPflowGaussianProcess2017,
title = {{{GPflow}}: {{A Gaussian Process Library}} Using {{TensorFlow}}},
shorttitle = {{{GPflow}}},
author = {Matthews, Alexander G. de G. and van der Wilk, Mark and Nickson, Tom and Fujii, Keisuke and Boukouvalas, Alexis and Le{\'o}n-Villagr{\'a}, Pablo and Ghahramani, Zoubin and Hensman, James},
date = {2017},
journaltitle = {Journal of Machine Learning Research},
volume = {18},
pages = {1--6},
issn = {1533-7928},
url = {http://jmlr.org/papers/v18/16-537.html},
urldate = {2021-06-03},
file = {/home/radu/Zotero/storage/VI8IWMWL/Matthews et al. - 2017 - GPflow A Gaussian Process Library using TensorFlo.pdf;/home/radu/Zotero/storage/S6WSAX5Z/16-537.html},
number = {40}
}
@article{nattererModelingMultilayerBeam2008,
title = {Modeling of Multi-Layer Beam with Interlayer Slips},
author = {Natterer, Johannes and Weinand, Yves},
date = {2008-01-01},
journaltitle = {Structures and Architecture - Proceedings of the 1st International Conference on Structures and Architecture, ICSA 2010},
shortjournal = {Structures and Architecture - Proceedings of the 1st International Conference on Structures and Architecture, ICSA 2010},
volume = {4},
issn = {978-0-415-49249-2},
doi = {10.1201/b10428-152},
abstract = {In the early 90th the IBOIS-EPFL developed a new kind of shell structure. The ribs were made with simple planks which are waved together to build a spacial ribbed shell. The first application was the Polydome in 1993, and the most fascinating has been the Expodach in Hannover in year 2000 [1]. However, the calculation and the realization of such structures requires particular knowledge and experience. That is why the construction of such structures is something exceptional. In addition to the anisotropy of material wood, the spatial structure of laminated and screwed beams has a structural anisotropy. They constitute highly unspecified static systems. Currently the engineer does not have any effective method to calculate these kinds of spatial structures, made out of curved screwed lamellate boards. The existing approximations for complex curved structures are not satisfying. The main differences are noticed especially upon the analysis of the stability of structures subject to horizontal loads. The following article will compare a 6-layered beam with inter-layer slips in different load cases and situations. The beam is composed of 6 planks with a section of 140/27mm. The connections between the layers are screws. The studied parameters are the distance between the connector, the length of the beam and 3 different load cases. A total of 24 elements have been tested and compared between the laboratory test and different theories. A very important parameter is the stiffness of the connector. Thus additional tests have been made to simulate the stiffness of the screws used. A bi-exponential law has been generated to be used in the theoretical evaluation of the tests. The actual theories which have been used to simulate the behavior of the sample are the gamma-Method of Möhler-Schelling, Appendix F of the German standard E-DIN 1052 of Prof. Kreuzinger, framework systems developed by Kneidl and Hartmann and finally a multi-layer finite element developed at the LSC-EPFL by Prof. Frei and Dr. Krawczyk.},
file = {/home/radu/Zotero/storage/TJE2V99Y/Natterer and Weinand - 2008 - Modeling of multi-layer beam with interlayer slips.pdf}
}
@article{nattererPolydomeTimberShell1993,
title = {Polydôme: {{A Timber Shell}}},
shorttitle = {Polydôme},
author = {Natterer, Julius and MacIntyre, John},
date = {1993-05-01},
journaltitle = {Structural Engineering International},
volume = {3},
pages = {82--83},
publisher = {{Taylor \& Francis}},
issn = {1016-8664},
doi = {10.2749/101686693780612376},
url = {https://doi.org/10.2749/101686693780612376},
urldate = {2021-06-04},
abstract = {The “Polydôme” is a 25 m-span timber shell exhibition built to commemorate the seven hundredth anniversary of the Swiss Confederation. The project was an opportunity to present concrete evidence or research, as well as to demonstrate the possibilities of engineered timber structures using simple material means.},
annotation = {\_eprint: https://doi.org/10.2749/101686693780612376},
file = {/home/radu/Zotero/storage/7LIEDD9S/Natterer and MacIntyre - 1993 - Polydôme A Timber Shell, Switzerland.pdf;/home/radu/Zotero/storage/MZ3EIFC4/101686693780612376.html},
keywords = {Decking,Polydôme (Switzerland),Roof systems,Structural analysis,Timber shells,Timber structures},
number = {2}
}
@online{pinterestSphericalDomeCalculator,
title = {Spherical {{Dome Calculator}}},
author = {South, Dave},
url = {https://monolithicdome.com/spherical-dome-calculator},
urldate = {2021-06-05},
abstract = {The MDI Spherical Dome Calculator assists in calculating common design elements of a partial sphere set on an optional stem wall. It helps with quick design ideas as well as provides accurate measurements when finalizing structural elements. Outputs include surface area, volume, circumference, and distances along and around the various details of the dome design.},
file = {/home/radu/Zotero/storage/SVQGWWW9/spherical-dome-calculator.html},
langid = {english},
organization = {{Monolithic Dome Institute}}
}
@inproceedings{pleweSupervisoryModelPredictive2020,
title = {A {{Supervisory Model Predictive Control Framework}} for {{Dual Temperature Setpoint Optimization}}},
booktitle = {2020 {{American Control Conference}} ({{ACC}})},
author = {Plewe, Kaden E. and Smith, Amanda D. and Liu, Mingxi},
date = {2020-07},
pages = {1900--1906},
issn = {2378-5861},
doi = {10.23919/ACC45564.2020.9147308},
abstract = {In this paper, a model predictive control (MPC) framework for building energy system setpoint optimization is developed and tested. The performance of the MPC framework is presented in comparison to a baseline case, where a fixed setpoint schedule is used. To simulate the MPC procedure, an EnergyPlus building model is used to represent the actual building that the optimal setpoints are applied to, and a Gaussian process (GP) regression meta-model is used in the MPC procedure that generates the optimal setpoints. The performance outputs that are used for evaluation are total heating, ventilation and air conditioning (HVAC) energy usage and the Fanger predicted mean vote (PMV) thermal comfort measure. The inputs for the GP regression meta-models are selected to be representative of data points that could be obtained by modern supervisory control and data acquisition (SCADA) systems to support data-driven building models. The supervisory MPC framework is capable of reducing the total energy usage with minor adjustments in thermal comfort.},
eventtitle = {2020 {{American Control Conference}} ({{ACC}})},
keywords = {Atmospheric modeling,Buildings,Cooling,Data models,Heating systems,Mathematical model,Predictive models}
}
@book{rasmussenGaussianProcessesMachine2006,
title = {Gaussian Processes for Machine Learning},
author = {Rasmussen, Carl Edward and Williams, Christopher K. I.},
date = {2006},
publisher = {{MIT Press}},
location = {{Cambridge, Mass}},
annotation = {OCLC: ocm61285753},
file = {/home/radu/Zotero/storage/VBZDYFA6/Rasmussen and Williams - 2006 - Gaussian processes for machine learning.pdf},
isbn = {978-0-262-18253-9},
keywords = {Data processing,Gaussian processes,Machine learning,Mathematical models},
langid = {english},
pagetotal = {248},
series = {Adaptive Computation and Machine Learning}
}
@article{sia180:2014ProtectionThermiqueProtection2014,
title = {Protection Thermique, Protection Contre l'humidité et Climat Intérieur Dans Les Bâtiments},
author = {{SIA 180:2014}},
date = {2014},
journaltitle = {Schweizerischer Ingenieur- und Architektenverein, Zürich, CH}
}
@misc{tensorflow2015-whitepaper,
title = {{{TensorFlow}}: {{Large}}-Scale Machine Learning on Heterogeneous Systems},
author = {Abadi, Martín and Agarwal, Ashish and Barham, Paul and Brevdo, Eugene and Chen, Zhifeng and Citro, Craig and Corrado, Greg S. and Davis, Andy and Dean, Jeffrey and Devin, Matthieu and Ghemawat, Sanjay and Goodfellow, Ian and Harp, Andrew and Irving, Geoffrey and Isard, Michael and Jia, Yangqing and Jozefowicz, Rafal and Kaiser, Lukasz and Kudlur, Manjunath and Levenberg, Josh and Mané, Dandelion and Monga, Rajat and Moore, Sherry and Murray, Derek and Olah, Chris and Schuster, Mike and Shlens, Jonathon and Steiner, Benoit and Sutskever, Ilya and Talwar, Kunal and Tucker, Paul and Vanhoucke, Vincent and Vasudevan, Vijay and Viégas, Fernanda and Vinyals, Oriol and Warden, Pete and Wattenberg, Martin and Wicke, Martin and Yu, Yuan and Zheng, Xiaoqiang},
date = {2015},
url = {https://www.tensorflow.org/}
}
@online{WhatAreTypical2018,
title = {What Are Typical {{U}}-{{Values}} on Windows and Doors?},
date = {2018-03-15T01:49:11+00:00},
url = {https://aspirebifolds.co.uk/2018/03/what-are-typical-u-values-on-windows-and-doors/},
urldate = {2021-06-08},
abstract = {What are the typical U-Values on windows you can expect with modern aluminium and PVCu windows? A guide to the expected U-Values on windows for the home.},
file = {/home/radu/Zotero/storage/RFA838X8/what-are-typical-u-values-on-windows-and-doors.html},
langid = {british},
organization = {{Aspire Bifolds - Home Improvements Surrey}}
}
@online{WindowsHighperformanceCommercial2020,
title = {Windows for {{High}}-Performance {{Commercial Buildings}}},
date = {2020-10-31},
url = {http://web.archive.org/web/20201031043434/https://www.commercialwindows.org/transmittance.php},
urldate = {2021-06-08},
file = {/home/radu/Zotero/storage/SRLH9ZPX/transmittance.html}
}
@article{yangUnderstandingVariationalLower,
title = {Understanding the {{Variational Lower Bound}}},
author = {Yang, Xitong},
pages = {4},
file = {/home/radu/Zotero/storage/C4GB9B44/Yang - Understanding the Variational Lower Bound.pdf},
langid = {english}
}
@online{yiSparseVariationalGaussian2021,
title = {Sparse and {{Variational Gaussian Process}} — {{What To Do When Data}} Is {{Large}}},
author = {Yi, Wei},
date = {2021-05-26T08:48:06},
url = {https://towardsdatascience.com/sparse-and-variational-gaussian-process-what-to-do-when-data-is-large-2d3959f430e7},
urldate = {2021-06-10},
abstract = {Learn how the Sparse and Variational Gaussian Process model uses inducing variables to scale to large datasets.},
file = {/home/radu/Zotero/storage/BIWB8N2V/sparse-and-variational-gaussian-process-what-to-do-when-data-is-large-2d3959f430e7.html},
langid = {english},
organization = {{Medium}}
}