Updated Notebook comments

commit 633c4d12d3
parent 12c879f016

5 changed files with 216 additions and 2413 deletions
@@ -4,7 +4,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Bayesian Optimisation of starting Gaussian Process hyperparameters"
+"# Sparse and Variational Gaussian Process Model Training and Performance Evaluation"
 ]
 },
 {
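The new title reframes the notebook around sparse and variational Gaussian process training rather than hyperparameter optimisation. For orientation, here is a minimal sketch of what such a model setup commonly looks like with GPflow's SVGP; the library choice, kernel, inducing-point count and data names are assumptions for illustration and are not taken from this diff.

```python
# Illustrative sketch only: one common way to set up the sparse/variational GP
# named in the new notebook title, using GPflow's SVGP class. The library,
# kernel, number of inducing points and data names are assumptions, not
# details taken from this commit.
import numpy as np
import tensorflow as tf
import gpflow

X_train = np.random.rand(500, 8)   # stand-in for the lagged input matrix
Y_train = np.random.rand(500, 1)   # stand-in for the SimulatedTemp target

M = 50                             # number of inducing points (assumed)
Z = X_train[np.random.choice(len(X_train), M, replace=False)].copy()

model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(lengthscales=np.ones(X_train.shape[1])),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=Z,
    num_data=len(X_train),
)

opt = tf.optimizers.Adam(learning_rate=0.01)
training_loss = model.training_loss_closure((X_train, Y_train))
for _ in range(1000):
    opt.minimize(training_loss, model.trainable_variables)

mean, var = model.predict_y(X_train[:5])   # predictive mean and variance
```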
@@ -78,9 +78,6 @@
 "cell_type": "markdown",
 "metadata": {
 "id": "90fdac33-eed4-4ab4-b2b1-de0f1f27727b",
-"jupyter": {
-"source_hidden": true
-},
 "tags": []
 },
 "source": [
@@ -199,7 +196,7 @@
 "id": "0aba0df5-b0e3-4738-bb61-1dad869d1ea3"
 },
 "source": [
-"## Load previously exported data"
+"## Load previously exported CARNOT 'experimental' data"
 ]
 },
 {
@@ -212,6 +209,13 @@
 "dfs_test = []"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Separate into training and testing data sets:"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 13,
@@ -235,6 +239,13 @@
 " dfs_test.append(pd.read_csv(f\"../Data/Good_CARNOT/{exp}_table.csv\").rename(columns = {'Power': 'SimulatedHeat'}))"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Separate columns into exogenous inputs, controlled inputs and outputs:"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 15,
@@ -248,6 +259,13 @@
 "y_cols = ['SimulatedTemp']"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Impose the autoregressive lags for each input group:"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 16,
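The cell added here introduces autoregressive lags per input group (the exogenous, controlled and output columns seen elsewhere in this diff). As a rough sketch of what imposing such lags usually amounts to with pandas, with the helper name and lag counts chosen purely for illustration:

```python
# Rough sketch of building lagged (autoregressive) regressors with pandas.
# The helper name and lag counts are illustrative; only the idea of shifting
# each column group by 1..n steps is implied by the notebook text.
import pandas as pd

def add_lags(df: pd.DataFrame, cols: list, n_lags: int) -> pd.DataFrame:
    """Append columns <col>_1 ... <col>_<n_lags> holding past values of <col>."""
    out = df.copy()
    for col in cols:
        for lag in range(1, n_lags + 1):
            out[f"{col}_{lag}"] = out[col].shift(lag)
    return out

# e.g. lag the measured output by 5 steps and the controlled input by 1 step,
# then drop the leading rows where the lagged values are undefined:
# df_gpr = add_lags(df, ['SimulatedTemp'], n_lags=5)
# df_gpr = add_lags(df_gpr, ['SimulatedHeat'], n_lags=1).dropna()
```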
@@ -497,6 +515,13 @@
 " return df_gpr"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Merge all the training dataframes:"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 25,
@@ -647,15 +672,6 @@
 "df_gpr_train.head()"
 ]
 },
-{
-"cell_type": "code",
-"execution_count": 26,
-"metadata": {},
-"outputs": [],
-"source": [
-"#df_gpr_train = df_gpr_train.sample(n = 500)"
-]
-},
 {
 "cell_type": "code",
 "execution_count": 27,
@@ -1245,6 +1261,13 @@
 "y_lags = 5"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Iterate over all combination of lags and compute for each the RMSE, SMSE, LPD and MSLL errors:"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 54,
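The grid search added here scores every lag combination with RMSE, SMSE, LPD and MSLL. For reference, these are the usual Gaussian-process evaluation measures: SMSE normalises the squared error by the variance of the test targets, LPD is the average log density of the test points under the Gaussian predictions, and MSLL subtracts the log loss of a trivial Gaussian fitted to the training targets. A small sketch, with variable names chosen for illustration:

```python
# Sketch of the four error measures named in the new cell, computed from a
# Gaussian predictive mean/variance. Definitions follow the usual conventions;
# variable names are illustrative and not taken from the notebook.
import numpy as np

def gp_metrics(y_true, mu, var, y_train):
    """Return RMSE, SMSE, LPD and MSLL for Gaussian predictions (mu, var)."""
    rmse = np.sqrt(np.mean((y_true - mu) ** 2))
    smse = np.mean((y_true - mu) ** 2) / np.var(y_true)
    # Negative log likelihood of each test point under N(mu, var) ...
    nll_model = 0.5 * np.log(2 * np.pi * var) + (y_true - mu) ** 2 / (2 * var)
    # ... and under a trivial Gaussian fitted to the training targets
    nll_trivial = 0.5 * np.log(2 * np.pi * np.var(y_train)) \
                  + (y_true - np.mean(y_train)) ** 2 / (2 * np.var(y_train))
    lpd = np.mean(-nll_model)                 # higher is better
    msll = np.mean(nll_model - nll_trivial)   # lower (more negative) is better
    return rmse, smse, lpd, msll
```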
@@ -1470,6 +1493,13 @@
 "## Multistep prediction"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Select the dataset which will be used for multistep prediction:"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 47,
@@ -1481,6 +1511,13 @@
 "df_output = dfs_gpr_test[test_dataset_idx][dict_cols['y'][1]]"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Select the starting index in the test dataset and the number of consecutive points to simulate:"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 48,
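The last two cells added above pick a test dataset, a starting index and a number of consecutive points to simulate. Multistep prediction then means recycling each predicted temperature as the lagged output regressor for the following step instead of using the measured value. A minimal sketch of that loop, assuming a model exposing a predict_y(x) -> (mean, variance) method and the lag layout used above (both assumptions):

```python
# Minimal sketch of a multistep (simulation) loop: the model's own predictions
# are fed back as the lagged output regressors. The predict_y interface, lag
# count and column ordering are assumptions for illustration.
import numpy as np

def multistep_predict(model, df_inputs, y_init_lags, start_idx, n_steps):
    """Simulate n_steps ahead from start_idx, recycling predictions as output lags."""
    lags = list(y_init_lags)           # most recent measured outputs, newest first
    means, variances = [], []
    for k in range(n_steps):
        exog = df_inputs.iloc[start_idx + k].to_numpy()   # known inputs at step k
        x = np.concatenate([exog, lags]).reshape(1, -1)
        mu, var = model.predict_y(x)                      # one-step-ahead prediction
        mu, var = float(np.squeeze(mu)), float(np.squeeze(var))
        means.append(mu)
        variances.append(var)
        lags = [mu] + lags[:-1]        # newest prediction becomes lag 1
    return np.array(means), np.array(variances)
```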