Exercises and associated data

The data and modelling objects created in this notebook can be downloaded directly to save computational time.


Users who wish to complete the exercises can download a small template R script. Assuming you have already downloaded the data objects above, this script will load all data objects so that the steps used to create them are not necessary to tackle the exercises.

Load libraries and time series data

This tutorial relates to content covered in Lecture 2 and Lecture 3, and relies on the following packages for manipulating data, shaping time series, fitting dynamic regression models and plotting:

library(dplyr)
library(mvgam) 
library(brms)
library(marginaleffects)
library(tidybayes)
library(ggplot2); theme_set(theme_classic())

We will continue with time series of rodent captures from the Portal Project, a long-term monitoring study based near the town of Portal, Arizona, with a particular focus on the time series of captures of the Desert Pocket Mouse Chaetodipus penicillatus. Data manipulations will proceed in a similar way to Tutorial 1, except that we now use a lagged version of ndvi (at a lag of 12 lunar months; see ?dplyr::lag for details) to give you a sense of how you can create and use lagged predictors in time series models.

data("portal_data")
portal_data %>%
  
  # mvgam requires a 'time' variable be present in the data to index
  # the temporal observations. This is especially important when tracking 
  # multiple time series. In the Portal data, the 'moon' variable indexes the
  # lunar monthly timestep of the trapping sessions
  dplyr::mutate(time = moon - (min(moon)) + 1) %>%
  
  # We can also provide a more informative name for the outcome variable, which 
  # is counts of the 'PP' species (Chaetodipus penicillatus) across all control
  # plots
  dplyr::mutate(count = PP) %>%
  
  # The other requirement for mvgam is a 'series' variable, which needs to be a
  # factor variable to index which time series each row in the data belongs to.
  # Again, this is more useful when you have multiple time series in the data
  dplyr::mutate(series = as.factor('PP')) %>%
  
  # Create a lagged version of ndvi to use as a predictor in place of the 
  # contemporaneous ndvi we have been using previously. Note, the data must be 
  # in the correct order for this to work properly
  dplyr::arrange(time) %>%
  dplyr::mutate(ndvi_lag12 = dplyr::lag(ndvi, 12)) %>%
  
  # Select the variables of interest to keep in the model_data
  dplyr::select(series, time, count,
                mintemp, ndvi_lag12) -> model_data

The data now contain five variables:
  - series, a factor indexing which time series each observation belongs to
  - time, the indicator of which time step each observation belongs to
  - count, the response variable representing the number of captures of the species PP in each sampling observation
  - mintemp, the monthly average minimum temperature at each time step
  - ndvi_lag12, the lagged monthly average Normalized Difference Vegetation Index at each time step

Using lagged versions of predictors is standard practice in time series analysis and forecasting. Unfortunately, the software we are using cannot easily handle missing values in the predictors. Because we’ve calculated lags of ndvi, the first 12 rows of our data now have missing values for this predictor:

head(model_data, 14)
##    series time count mintemp ndvi_lag12
## 1      PP    1     0  -9.710         NA
## 2      PP    2     1  -5.924         NA
## 3      PP    3     2  -0.220         NA
## 4      PP    4    NA   1.931         NA
## 5      PP    5    10   6.568         NA
## 6      PP    6    NA  11.590         NA
## 7      PP    7    NA  14.370         NA
## 8      PP    8    16  16.520         NA
## 9      PP    9    18  12.490         NA
## 10     PP   10    12   9.210         NA
## 11     PP   11    NA   0.845         NA
## 12     PP   12     3  -7.080         NA
## 13     PP   13     2  -7.550   1.465889
## 14     PP   14    NA  -3.633   1.558507

Visualize the data as a heatmap to get a sense of where these missing values are distributed (NAs are shown as red bars in the plot below)

image(is.na(t(model_data %>%
                dplyr::arrange(dplyr::desc(time)))), axes = F,
      col = c('grey80', 'darkred'))
axis(3, at = seq(0,1, len = NCOL(model_data)), labels = colnames(model_data))

For now we will simply omit these missing values and restart the time index at 1:

model_data %>%
  dplyr::filter(!is.na(ndvi_lag12)) %>%
  dplyr::mutate(time = time - min(time) + 1) -> model_data_trimmed
head(model_data_trimmed)

In the previous tutorial we split the data into training and testing subsets so that we could inspect forecast behaviours. We will do the same here, but use a shorter validation period in this example

model_data_trimmed %>% 
  dplyr::filter(time <= 162) -> data_train 
model_data_trimmed %>% 
  dplyr::filter(time > 162) -> data_test

Forecasting with temporal smooths

Our last tutorial ended with a model that included a smooth function of time. The lectures have illustrated why this might be a bad idea if one of our goals is to forecast, and producing forecasts from this model will show why. Start by refitting the model from Tutorial 1, including the data_test object as newdata so that the model automatically produces forecasts.

model0 <- mvgam(count ~ s(time, bs = 'bs', k = 15) + 
                  ndvi_lag12 + 
                  mintemp,
                family = poisson(),
                data = data_train,
                newdata = data_test)

Inspect forecasts from this model, which are not great

plot(model0, type = 'forecast', newdata = data_test)
## Out of sample DRPS:
## 310.32655475

What happens if we change k slightly by using 12 basis functions rather than 15?

model0b <- mvgam(count ~ s(time, bs = 'bs', k = 12) + 
                  ndvi_lag12 + 
                  mintemp,
                family = poisson(),
                data = data_train,
                newdata = data_test)

Forecasts from this model are, somehow, even more ridiculous!

plot(model0b, type = 'forecast', newdata = data_test)
## Out of sample CRPS:
## 7178.942387

Why is this happening? The forecasts are driven almost entirely by variation in the temporal spline, which is extrapolating linearly forever beyond the training data. Any slight wiggles near the boundary will result in wildly different forecasts. To visualize this, we can plot the extrapolated temporal functions for the two models. Here for the first model, with 15 basis functions:

plot_predictions(model0, by = 'time',
                 newdata = datagrid(time = 1:max(data_test$time)),
                 type = 'link') +
  geom_vline(xintercept = max(data_train$time), 
             linetype = 'dashed')

And for the second model, with fewer basis functions (12). The difference is clear, with this model reacting to a slight upward shift in the smooth toward the boundary that results in predictions heading to infinity:

plot_predictions(model0b, by = 'time',
                 newdata = datagrid(time = 1:max(data_test$time)),
                 type = 'link') +
  geom_vline(xintercept = max(data_train$time), 
             linetype = 'dashed')

Try changing the value of k and seeing how the forecasts change (a sketch of one such change follows below). This should give you more confidence that extrapolating from penalized smooths can be unpredictable and difficult to control. There are some strategies to help rein in this behaviour (once again this advice comes via a blogpost by Gavin Simpson), but results will still be unpredictable and sensitive to choices such as k or the placement of knots. But if we shouldn’t use smooth functions of time for forecasting, what other options do we have?
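
A sketch of the sort of change to try (the choice of k = 8 here is arbitrary; any value will do for exploring sensitivity):

# Refit with yet another arbitrary choice of k and inspect the forecasts
model0c <- mvgam(count ~ s(time, bs = 'bs', k = 8) + 
                   ndvi_lag12 + 
                   mintemp,
                 family = poisson(),
                 data = data_train,
                 newdata = data_test)
plot(model0c, type = 'forecast', newdata = data_test)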

Exercises

  1. Using advice from Gavin Simpson’s blog post above, try changing the order of the penalty in the temporal smooth to see how the resulting predictions change (use ?b.spline for guidance)
  2. Try using a cubic regression spline in place of the b-spline and inspect the resulting predictions (use bs = 'cr')
Check here for template code if you’re having trouble changing the penalty order in the b-spline
# Replace the ? with the correct value(s)
# Using ?b.spline you will see examples in the Description and Details sections about how these splines can handle multiple penalties
model0 <- mvgam(count ~ s(time, bs = 'bs', k = 15,
                          m = c(?)) + 
                  ndvi_lag12 + 
                  mintemp,
                family = poisson(),
                data = data_train,
                newdata = data_test)

Residual correlation structures

One useful strategy to deal with this is to fit models that incorporate autocorrelated residual processes within the GAM. If our model’s residuals are autocorrelated, our estimates of uncertainty around other regression parameters (such as the \(\beta\) coefficients) can be biased and misleading. Failing to address this may lead to false inferences and perhaps to poor forecasts. Fortunately there are ways we can tackle this by modelling the autocorrelation in our model’s errors. This can be done using mgcv functionality, as the blog post above explains. But we will use brms here as it is more flexible and provides an interface that is more consistent with mvgam.
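
For reference, here is a minimal sketch (not run in this tutorial) of the mgcv route: gamm() can attach an AR1 structure to the residuals via nlme's corAR1(). The formula simply mirrors the autocorrelated-error model fitted with brms later in this section and is shown for illustration only.

# A sketch of an AR1 residual structure in mgcv (illustrative only)
library(mgcv)
model_gamm <- gamm(count ~ s(ndvi_lag12, k = 9) + mintemp,
                   family = poisson(),
                   correlation = nlme::corAR1(form = ~ time),
                   data = data_train)
summary(model_gamm$gam)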

A standard Poisson GAM in brms

First, we will fit a model similar to those used in Tutorial 1 to see how brms works. The interface is very familiar, relying on the usual R formula syntax (see ?brmsformula for details and examples). But there are some key differences between brms and mvgam. For example, brms can handle a much larger diversity of response distributions (see ?brmsfamily for details). For now, we will stick with a similar Poisson model to the one used previously (a nonlinear function of ndvi_lag12 and a parametric, linear function of mintemp):

model1 <- brm(count ~ 
                s(ndvi_lag12, k = 9) +
                mintemp,
              family = poisson(),
              # set a mildly informative prior on beta coefficients
              prior = set_prior("normal(0, 2)", class = "b"),
              data = data_train,
              backend = 'cmdstanr',
              iter = 1000,
              cores = 4)

Use summary() to ensure no major diagnostic warnings have been produced and inspect posterior distributions for key parameters. You may notice some warnings of divergent transitions for this model, which should give you some immediate concern about the validity of the model’s inferences

summary(model1)
## Warning: Parts of the model have not converged (some Rhats are > 1.05). Be
## careful when analysing the results! We recommend running more iterations and/or
## setting stronger priors.
## Warning: There were 118 divergent transitions after warmup. Increasing
## adapt_delta above 0.8 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
##  Family: poisson 
##   Links: mu = log 
## Formula: count ~ s(ndvi_lag12, k = 9) + mintemp 
##    Data: data_train (Number of observations: 134) 
##   Draws: 4 chains, each with iter = 1000; warmup = 500; thin = 1;
##          total post-warmup draws = 2000
## 
## Smoothing Spline Hyperparameters:
##                    Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sds(sndvi_lag12_1)     0.35      0.36     0.02     1.50 1.08       40       34
## 
## Regression Coefficients:
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept         2.39      0.03     2.33     2.45 1.01      920      931
## mintemp           0.07      0.00     0.06     0.07 1.01      860      872
## sndvi_lag12_1    -0.22      0.50    -1.67     0.80 1.06       47       24
## 
## Draws were sampled using sample(hmc). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
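
The warning itself suggests one tuning option. A sketch of how you might refit with a larger adapt_delta (note this only targets the sampler warnings; it does not address the lack of any autocorrelation structure, which we tackle below):

# Refit with a higher adapt_delta to reduce divergent transitions (sketch)
model1b <- brm(count ~ 
                 s(ndvi_lag12, k = 9) +
                 mintemp,
               family = poisson(),
               prior = set_prior("normal(0, 2)", class = "b"),
               data = data_train,
               backend = 'cmdstanr',
               iter = 1000,
               cores = 4,
               control = list(adapt_delta = 0.95))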

For any brms model, we can inspect the underlying Stan code using the stancode function:

stancode(model1)
Check here to see the model’s Stan code
## // generated with brms 2.21.0
## functions {
## }
## data {
##   int<lower=1> N;  // total number of observations
##   array[N] int Y;  // response variable
##   int<lower=1> K;  // number of population-level effects
##   matrix[N, K] X;  // population-level design matrix
##   int<lower=1> Kc;  // number of population-level effects after centering
##   // data for splines
##   int Ks;  // number of linear effects
##   matrix[N, Ks] Xs;  // design matrix for the linear effects
##   // data for spline 1
##   int nb_1;  // number of bases
##   array[nb_1] int knots_1;  // number of knots
##   // basis function matrices
##   matrix[N, knots_1[1]] Zs_1_1;
##   int prior_only;  // should the likelihood be ignored?
## }
## transformed data {
##   matrix[N, Kc] Xc;  // centered version of X without an intercept
##   vector[Kc] means_X;  // column means of X before centering
##   for (i in 2:K) {
##     means_X[i - 1] = mean(X[, i]);
##     Xc[, i - 1] = X[, i] - means_X[i - 1];
##   }
## }
## parameters {
##   vector[Kc] b;  // regression coefficients
##   real Intercept;  // temporary intercept for centered predictors
##   vector[Ks] bs;  // unpenalized spline coefficients
##   // parameters for spline 1
##   // standardized penalized spline coefficients
##   vector[knots_1[1]] zs_1_1;
##   vector<lower=0>[nb_1] sds_1;  // SDs of penalized spline coefficients
## }
## transformed parameters {
##   // penalized spline coefficients
##   vector[knots_1[1]] s_1_1;
##   real lprior = 0;  // prior contributions to the log posterior
##   // compute penalized spline coefficients
##   s_1_1 = sds_1[1] * zs_1_1;
##   lprior += normal_lpdf(b | 0, 2);
##   lprior += student_t_lpdf(Intercept | 3, 2.6, 2.5);
##   lprior += normal_lpdf(bs | 0, 2);
##   lprior += student_t_lpdf(sds_1 | 3, 0, 2.5)
##     - 1 * student_t_lccdf(0 | 3, 0, 2.5);
## }
## model {
##   // likelihood including constants
##   if (!prior_only) {
##     // initialize linear predictor term
##     vector[N] mu = rep_vector(0.0, N);
##     mu += Intercept + Xs * bs + Zs_1_1 * s_1_1;
##     target += poisson_log_glm_lpmf(Y | Xc, mu, b);
##   }
##   // priors including constants
##   target += lprior;
##   target += std_normal_lpdf(zs_1_1);
## }
## generated quantities {
##   // actual population-level intercept
##   real b_Intercept = Intercept - dot_product(means_X, b);
## }


Of course we still cannot interpret the smooth effects using the posterior summary. We will need to make targeted plots to do that. Fortunately brms has a conditional_effects() function similar to mvgam's, so this is simple to do:

plot(conditional_effects(model1),
     theme = theme_classic(),
     mean = FALSE,
     rug = TRUE, 
     ask = FALSE)

Computing residuals with tidybayes

A notable difference between brms and mvgam is that the former does not automatically return predictions. These need to be computed after model fitting, which means that residuals also have to be computed. To inspect residual diagnostics we can enlist the help of the tidybayes package to produce randomized quantile residuals. More information on this procedure can be found in this tidybayes vignette. Do not worry if it is difficult to understand; it is not of major relevance to the learning objectives of the course (and if you use mvgam, this will be done for you automatically anyway)

data_train %>%
  # calculate posterior predictions
  add_predicted_draws(model1) %>%
  # use predictions to compute randomized quantile residuals
  summarise(
    p_lower = mean(.prediction < count),
    p_upper = mean(.prediction <= count),
    p_residual = runif(1, 
                       max(p_lower, 0.00001), 
                       min(p_upper, 0.99999)),
    z_residual = qnorm(p_residual),
    .groups = "drop_last"
  ) -> residual_data

Perform the usual residual diagnostics. For example, below are residual ACF, pACF and Q-Q plots, all of which show that the model’s predictions do not adequately match the features of the observed data

acf(residual_data$z_residual, 
    na.action = na.pass, 
    bty = 'l',
    lwd = 2,
    ci.col = 'darkred',
    main = 'Dunn-Smyth residuals')

pacf(residual_data$z_residual, 
    na.action = na.pass, 
    bty = 'l',
    lwd = 2,
    ci.col = 'darkred',
    main = 'Dunn-Smyth residuals')

ggplot(residual_data, aes(sample = z_residual)) +
  geom_qq() +
  geom_abline(col = 'darkred') +
  theme_classic()
## Warning: Removed 56 rows containing non-finite outside the scale range
## (`stat_qq()`).

Autocorrelated error GAM in brms

This model is not doing well. Clearly we need to account for temporal autocorrelation without using a smooth of time. Now onto another great feature of brms: the ability to include (possibly latent) autocorrelated residuals with the ar() function (see ?brms::ar for details). This model will use a separate sub-model for latent residuals that evolve as an AR1 process (i.e. the error in the current time point is a function of the error in the previous time point, plus stochastic noise). When using non-Gaussian observations, we can only select an order of 1 for the AR component (higher orders are only currently allowed for Gaussian observations). Note, this model takes much longer to fit than the previous model because it estimates a full temporal covariance for the residuals

model2 <- brm(count ~ 
                s(ndvi_lag12, k = 9) +
                mintemp +
                ar(p = 1, 
                   time = time,
                   cov = TRUE),
              family = poisson(),
              prior = c(set_prior("normal(0, 2)", class = "b")),
              data = data_train,
              backend = 'cmdstanr',
              iter = 1000,
              cores = 4)
Check here to see a mathematical description of the model

The model can be described mathematically as follows: \[\begin{align*} \boldsymbol{count}_t & \sim \text{Poisson}(\lambda_t) \\ log(\lambda_t) & = f(\boldsymbol{ndvi})_t + \beta_{mintemp} * \boldsymbol{mintemp}_t + z_t \\ z_t & \sim \text{Normal}(ar1 * z_{t-1}, \sigma_{error}) \\ ar1 & \sim \text{Normal}(0, 1)[-1, 1] \\ \sigma_{error} & \sim \text{Exponential}(2) \\ f(\boldsymbol{ndvi}) & = \sum_{k=1}^{K}b * \beta_{smooth} \\ \beta_{smooth} & \sim \text{MVNormal}(0, (\Omega * \lambda)^{-1}) \\ \beta_{mintemp} & \sim \text{Normal}(0, 1) \end{align*}\]

Here the term \(z_t\) captures autocorrelated latent residuals, which are modelled using an AR1 process. In brms, the AR coefficients are restricted to the interval \([-1,1]\) to ensure the dynamic process is stationary.


View the model’s summary, and notice how there are now summaries for the residual autoregressive parameters as well

summary(model2)
## Warning: There were 15 divergent transitions after warmup. Increasing
## adapt_delta above 0.8 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
##  Family: poisson 
##   Links: mu = log 
## Formula: count ~ s(ndvi_lag12, k = 9) + mintemp + ar(p = 1, time = time, cov = TRUE) 
##    Data: data_train (Number of observations: 134) 
##   Draws: 4 chains, each with iter = 1000; warmup = 500; thin = 1;
##          total post-warmup draws = 2000
## 
## Smoothing Spline Hyperparameters:
##                    Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sds(sndvi_lag12_1)     0.47      0.43     0.01     1.62 1.00      872      871
## 
## Correlation Structures:
##       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## ar[1]     0.80      0.06     0.67     0.91 1.00      626      845
## sderr     0.65      0.07     0.53     0.79 1.00      773     1236
## 
## Regression Coefficients:
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept         1.85      0.33     1.22     2.48 1.01      785      857
## mintemp           0.09      0.01     0.06     0.11 1.00      872     1031
## sndvi_lag12_1     0.23      0.89    -1.65     1.99 1.00     1302     1240
## 
## Draws were sampled using sample(hmc). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

Again you can inspect the Stan code if you’d like to see how the autocorrelation structure is included in the linear predictor

stancode(model2)
Check here to see the model’s Stan code
## // generated with brms 2.21.0
## functions {
##   /* integer sequence of values
##    * Args:
##    *   start: starting integer
##    *   end: ending integer
##    * Returns:
##    *   an integer sequence from start to end
##    */
##   array[] int sequence(int start, int end) {
##     array[end - start + 1] int seq;
##     for (n in 1:num_elements(seq)) {
##       seq[n] = n + start - 1;
##     }
##     return seq;
##   }
##   // are two 1D integer arrays equal?
##   int is_equal(array[] int a, array[] int b) {
##     int n_a = size(a);
##     int n_b = size(b);
##     if (n_a != n_b) {
##       return 0;
##     }
##     for (i in 1:n_a) {
##       if (a[i] != b[i]) {
##         return 0;
##       }
##     }
##     return 1;
##   }
## 
##   /* grouped data stored linearly in "data" as indexed by begin and end
##    * is repacked to be stacked into an array of vectors.
##    */
##   array[] vector stack_vectors(vector long_data, int n, array[] int stack,
##                                array[] int begin, array[] int end) {
##     int S = sum(stack);
##     int G = size(stack);
##     array[S] vector[n] stacked;
##     int j = 1;
##     for (i in 1:G) {
##       if (stack[i] == 1) {
##         stacked[j] = long_data[begin[i]:end[i]];
##         j += 1;
##       }
##     }
##     return stacked;
##   }
##   /* scale and correlate time-series residuals
##    * using the Cholesky factor of the correlation matrix
##    * Args:
##    *   zerr: standardized and independent residuals
##    *   sderr: standard deviation of the residuals
##    *   chol_cor: cholesky factor of the correlation matrix
##    *   nobs: number of observations in each group
##    *   begin: the first observation in each group
##    *   end: the last observation in each group
##    * Returns:
##    *   vector of scaled and correlated residuals
##    */
##    vector scale_time_err(vector zerr, real sderr, matrix chol_cor,
##                          array[] int nobs, array[] int begin, array[] int end) {
##      vector[rows(zerr)] err;
##      for (i in 1:size(nobs)) {
##        matrix[nobs[i], nobs[i]] L_i;
##        L_i = sderr * chol_cor[1:nobs[i], 1:nobs[i]];
##        err[begin[i]:end[i]] = L_i * zerr[begin[i]:end[i]];
##      }
##      return err;
##    }
##   /* scale and correlate time-series residuals
##    * allows for flexible correlation matrix subsets
##    * Deviating Args:
##    *   Jtime: array of time indices per group
##    * Returns:
##    *   vector of scaled and correlated residuals
##    */
##    vector scale_time_err_flex(vector zerr, real sderr, matrix chol_cor,
##                               array[] int nobs, array[] int begin, array[] int end, array[,] int Jtime) {
##      vector[rows(zerr)] err;
##      int I = size(nobs);
##      array[I] int has_err = rep_array(0, I);
##      int i = 1;
##      matrix[rows(chol_cor), cols(chol_cor)] L;
##      matrix[rows(chol_cor), cols(chol_cor)] Cov;
##      L = sderr * chol_cor;
##      Cov = multiply_lower_tri_self_transpose(L);
##      while (i <= I) {
##        array[nobs[i]] int iobs = Jtime[i, 1:nobs[i]];
##        matrix[nobs[i], nobs[i]] L_i;
##        if (is_equal(iobs, sequence(1, rows(L)))) {
##          // all timepoints are present in this group
##          L_i = L;
##        } else {
##          // arbitrary subsets cannot be taken on L directly
##          L_i = cholesky_decompose(Cov[iobs, iobs]);
##        }
##        err[begin[i]:end[i]] = L_i * zerr[begin[i]:end[i]];
##        has_err[i] = 1;
##        // find all additional groups where we have the same timepoints
##        for (j in (i+1):I) {
##          if (has_err[j] == 0 && is_equal(Jtime[j], Jtime[i]) == 1) {
##            err[begin[j]:end[j]] = L_i * zerr[begin[j]:end[j]];
##            has_err[j] = 1;
##          }
##        }
##        while (i <= I && has_err[i] == 1) {
##          i += 1;
##        }
##     }
##     return err;
##   }
##   /* compute the cholesky factor of an AR1 correlation matrix
##    * Args:
##    *   ar: AR1 autocorrelation
##    *   nrows: number of rows of the covariance matrix
##    * Returns:
##    *   A nrows x nrows matrix
##    */
##    matrix cholesky_cor_ar1(real ar, int nrows) {
##      matrix[nrows, nrows] mat;
##      vector[nrows - 1] gamma;
##      mat = diag_matrix(rep_vector(1, nrows));
##      for (i in 2:nrows) {
##        gamma[i - 1] = pow(ar, i - 1);
##        for (j in 1:(i - 1)) {
##          mat[i, j] = gamma[i - j];
##          mat[j, i] = gamma[i - j];
##        }
##      }
##      return cholesky_decompose(mat ./ (1 - ar^2));
##    }
##   /* compute the cholesky factor of a MA1 correlation matrix
##    * Args:
##    *   ma: MA1 autocorrelation
##    *   nrows: number of rows of the covariance matrix
##    * Returns:
##    *   A nrows x nrows MA1 covariance matrix
##    */
##    matrix cholesky_cor_ma1(real ma, int nrows) {
##      matrix[nrows, nrows] mat;
##      mat = diag_matrix(rep_vector(1 + ma^2, nrows));
##      if (nrows > 1) {
##        mat[1, 2] = ma;
##        for (i in 2:(nrows - 1)) {
##          mat[i, i - 1] = ma;
##          mat[i, i + 1] = ma;
##        }
##        mat[nrows, nrows - 1] = ma;
##      }
##      return cholesky_decompose(mat);
##    }
##   /* compute the cholesky factor of an ARMA1 correlation matrix
##    * Args:
##    *   ar: AR1 autocorrelation
##    *   ma: MA1 autocorrelation
##    *   nrows: number of rows of the covariance matrix
##    * Returns:
##    *   A nrows x nrows matrix
##    */
##    matrix cholesky_cor_arma1(real ar, real ma, int nrows) {
##      matrix[nrows, nrows] mat;
##      vector[nrows] gamma;
##      mat = diag_matrix(rep_vector(1 + ma^2 + 2 * ar * ma, nrows));
##      gamma[1] = (1 + ar * ma) * (ar + ma);
##      for (i in 2:nrows) {
##        gamma[i] = gamma[1] * pow(ar, i - 1);
##        for (j in 1:(i - 1)) {
##          mat[i, j] = gamma[i - j];
##          mat[j, i] = gamma[i - j];
##        }
##      }
##      return cholesky_decompose(mat ./ (1 - ar^2));
##    }
## }
## data {
##   int<lower=1> N;  // total number of observations
##   array[N] int Y;  // response variable
##   int<lower=1> K;  // number of population-level effects
##   matrix[N, K] X;  // population-level design matrix
##   int<lower=1> Kc;  // number of population-level effects after centering
##   // data for splines
##   int Ks;  // number of linear effects
##   matrix[N, Ks] Xs;  // design matrix for the linear effects
##   // data for spline 1
##   int nb_1;  // number of bases
##   array[nb_1] int knots_1;  // number of knots
##   // basis function matrices
##   matrix[N, knots_1[1]] Zs_1_1;
##   // data needed for ARMA correlations
##   int<lower=0> Kar;  // AR order
##   int<lower=0> Kma;  // MA order
##   // see the functions block for details
##   int<lower=1> N_tg;
##   array[N_tg] int<lower=1> begin_tg;
##   array[N_tg] int<lower=1> end_tg;
##   array[N_tg] int<lower=1> nobs_tg;
##   int prior_only;  // should the likelihood be ignored?
## }
## transformed data {
##   matrix[N, Kc] Xc;  // centered version of X without an intercept
##   vector[Kc] means_X;  // column means of X before centering
##   int max_lag = max(Kar, Kma);
##   int max_nobs_tg = max(nobs_tg);  // maximum dimension of the autocorrelation matrix
##   for (i in 2:K) {
##     means_X[i - 1] = mean(X[, i]);
##     Xc[, i - 1] = X[, i] - means_X[i - 1];
##   }
## }
## parameters {
##   vector[Kc] b;  // regression coefficients
##   real Intercept;  // temporary intercept for centered predictors
##   vector[Ks] bs;  // unpenalized spline coefficients
##   // parameters for spline 1
##   // standardized penalized spline coefficients
##   vector[knots_1[1]] zs_1_1;
##   vector<lower=0>[nb_1] sds_1;  // SDs of penalized spline coefficients
##   vector[N] zerr;  // unscaled residuals
##   real<lower=0> sderr;  // SD of residuals
##   vector<lower=-1,upper=1>[Kar] ar;  // autoregressive coefficients
## }
## transformed parameters {
##   // penalized spline coefficients
##   vector[knots_1[1]] s_1_1;
##   vector[N] err;  // actual residuals
##   // cholesky factor of the autocorrelation matrix
##   matrix[max_nobs_tg, max_nobs_tg] Lcortime;
##   real lprior = 0;  // prior contributions to the log posterior
##   // compute residual covariance matrix
##   Lcortime = cholesky_cor_ar1(ar[1], max_nobs_tg);
##   // compute correlated time-series residuals
##   err = scale_time_err(zerr, sderr, Lcortime, nobs_tg, begin_tg, end_tg);
##   // compute penalized spline coefficients
##   s_1_1 = sds_1[1] * zs_1_1;
##   lprior += normal_lpdf(b | 0, 2);
##   lprior += student_t_lpdf(Intercept | 3, 2.6, 2.5);
##   lprior += normal_lpdf(bs | 0, 2);
##   lprior += student_t_lpdf(sds_1 | 3, 0, 2.5)
##     - 1 * student_t_lccdf(0 | 3, 0, 2.5);
##   lprior += student_t_lpdf(sderr | 3, 0, 2.5)
##     - 1 * student_t_lccdf(0 | 3, 0, 2.5);
## }
## model {
##   // likelihood including constants
##   if (!prior_only) {
##     // initialize linear predictor term
##     vector[N] mu = rep_vector(0.0, N);
##     mu += Intercept + Xc * b + Xs * bs + Zs_1_1 * s_1_1 + err;
##     target += poisson_log_lpmf(Y | mu);
##   }
##   // priors including constants
##   target += lprior;
##   target += std_normal_lpdf(zs_1_1);
##   target += std_normal_lpdf(zerr);
## }
## generated quantities {
##   // actual population-level intercept
##   real b_Intercept = Intercept - dot_product(means_X, b);
## }


Autocorrelated error GAM in mvgam

The brms model is estimating autocorrelated errors for the full time period, even though some of these time points have missing observations. This is useful for getting more realistic estimates of the residual autocorrelation parameters. In contrast, if you were to attempt an AR error process in mgcv (which you can do using gamm() or bam()), the time points with missing data would be thrown out before constructing the model. This will cause problems, sometimes small but sometimes very big, for your inferences.
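
A quick check (a sketch) of how many response values are missing in the training data; mvgam keeps these timepoints and treats the counts as unobserved:

# Number of missing response values that mvgam will retain as latent states
sum(is.na(data_train$count))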

mvgam can also fit this latent AR1 trend model, and it is considerably faster. Let’s fit it (note that only 250 posterior samples per chain are taken to reduce the size of the model, which will make it easier to download for completing the exercises):

model3 <- mvgam(count ~ 
                  s(ndvi_lag12, k = 9) +
                  mintemp,
                family = poisson(),
                data = data_train,
                newdata = data_test,
                priors = prior(normal(0, 2), class = b),
                samples = 250,
                trend_model = AR(p = 1))

Inspect the Stan code for this model to see how it differs from the brms version

code(model3)
Check here to see the model’s Stan code
## // Stan model code generated by package mvgam
## data {
##   int<lower=0> total_obs; // total number of observations
##   int<lower=0> n; // number of timepoints per series
##   int<lower=0> n_sp; // number of smoothing parameters
##   int<lower=0> n_series; // number of series
##   int<lower=0> num_basis; // total number of basis coefficients
##   vector[num_basis] zero; // prior locations for basis coefficients
##   matrix[total_obs, num_basis] X; // mgcv GAM design matrix
##   array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
##   matrix[8, 16] S1; // mgcv smooth penalty matrix S1
##   int<lower=0> n_nonmissing; // number of nonmissing observations
##   array[n_nonmissing] int<lower=0> flat_ys; // flattened nonmissing observations
##   matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
##   array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
## }
## parameters {
##   // raw basis coefficients
##   vector[num_basis] b_raw;
##   
##   // latent trend AR1 terms
##   vector<lower=-1.5, upper=1.5>[n_series] ar1;
##   
##   // latent trend variance parameters
##   vector<lower=0>[n_series] sigma;
##   
##   // latent trends
##   matrix[n, n_series] trend;
##   
##   // smoothing parameters
##   vector<lower=0>[n_sp] lambda;
## }
## transformed parameters {
##   // basis coefficients
##   vector[num_basis] b;
##   b[1 : num_basis] = b_raw[1 : num_basis];
## }
## model {
##   // prior for (Intercept)...
##   b_raw[1] ~ student_t(3, 2.6, 2.5);
##   
##   // prior for mintemp...
##   b_raw[2] ~ normal(0, 2);
##   
##   // prior for s(ndvi_lag12)...
##   b_raw[3 : 10] ~ multi_normal_prec(zero[3 : 10],
##                                     S1[1 : 8, 1 : 8] * lambda[1]
##                                     + S1[1 : 8, 9 : 16] * lambda[2]);
##   
##   // priors for AR parameters
##   ar1 ~ std_normal();
##   
##   // priors for smoothing parameters
##   lambda ~ normal(5, 30);
##   
##   // priors for latent trend variance parameters
##   sigma ~ student_t(3, 0, 2.5);
##   
##   // trend estimates
##   trend[1, 1 : n_series] ~ normal(0, sigma);
##   for (s in 1 : n_series) {
##     trend[2 : n, s] ~ normal(ar1[s] * trend[1 : (n - 1), s], sigma[s]);
##   }
##   {
##     // likelihood functions
##     vector[n_nonmissing] flat_trends;
##     flat_trends = to_vector(trend)[obs_ind];
##     flat_ys ~ poisson_log_glm(append_col(flat_xs, flat_trends), 0.0,
##                               append_row(b, 1.0));
##   }
## }
## generated quantities {
##   vector[total_obs] eta;
##   matrix[n, n_series] mus;
##   vector[n_sp] rho;
##   vector[n_series] tau;
##   array[n, n_series] int ypred;
##   rho = log(lambda);
##   for (s in 1 : n_series) {
##     tau[s] = pow(sigma[s], -2.0);
##   }
##   
##   // posterior predictions
##   eta = X * b;
##   for (s in 1 : n_series) {
##     mus[1 : n, s] = eta[ytimes[1 : n, s]] + trend[1 : n, s];
##     ypred[1 : n, s] = poisson_log_rng(mus[1 : n, s]);
##   }
## }


Check the summary:

summary(model3)
## GAM formula:
## count ~ s(ndvi_lag12, k = 9) + mintemp
## 
## Family:
## poisson
## 
## Link function:
## log
## 
## Trend model:
## AR1
## 
## N series:
## 1 
## 
## N timepoints:
## 162 
## 
## Status:
## Fitted using Stan 
## 4 chains, each with iter = 750; warmup = 500; thin = 1 
## Total post-warmup draws = 1000
## 
## 
## GAM coefficient (beta) estimates:
##                   2.5%     50% 97.5% Rhat n_eff
## (Intercept)      1.500  1.8000 2.200 1.05    51
## mintemp          0.063  0.0900 0.110 1.02   136
## s(ndvi_lag12).1 -0.110 -0.0083 0.094 1.00   659
## s(ndvi_lag12).2 -0.220  0.0018 0.200 1.02   258
## s(ndvi_lag12).3 -0.051 -0.0018 0.042 1.01   268
## s(ndvi_lag12).4 -0.120  0.0060 0.110 1.02   265
## s(ndvi_lag12).5 -0.054  0.0027 0.057 1.02   274
## s(ndvi_lag12).6 -0.095  0.0071 0.100 1.02   260
## s(ndvi_lag12).7 -0.580  0.0560 0.630 1.02   263
## s(ndvi_lag12).8 -0.140  0.0450 0.240 1.03   299
## 
## Approximate significance of GAM smooths:
##                edf Ref.df Chi.sq p-value
## s(ndvi_lag12) 2.29      8   10.4    0.86
## 
## Latent trend parameter AR estimates:
##          2.5%  50% 97.5% Rhat n_eff
## ar1[1]   0.67 0.80  0.91 1.01   382
## sigma[1] 0.51 0.62  0.76 1.02   243
## 
## Stan MCMC diagnostics:
## n_eff / iter looks reasonable for all parameters
## Rhats above 1.05 found for 1 parameters
##  *Diagnose further to investigate why the chains have not mixed
## 0 of 1000 iterations ended with a divergence (0%)
## 0 of 1000 iterations saturated the maximum tree depth of 12 (0%)
## E-FMI indicated no pathological behavior
## 
## Samples were drawn using NUTS(diag_e) at Wed Mar 27 3:40:50 PM 2024.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split MCMC chains
## (at convergence, Rhat = 1)

Comparing inferences

We can compare inferences from the two models to see that they are similar (although the plots are displayed in a different order):

plot(conditional_effects(model2), ask = FALSE)

conditional_effects(model3)
## Warning: Removed 28 rows containing missing values or values outside the scale range
## (`geom_point()`).

## Warning: Removed 28 rows containing missing values or values outside the scale range
## (`geom_point()`).

Now compare estimates for the residual AR1 parameters. First, extract posterior draws for the AR1 parameter from the two models, which can be done using as.data.frame():

ar1_ests <- rbind(data.frame(
  ar1 = as.data.frame(model3, variable = 'ar1[1]')$`ar1[1]`,
  model = 'mvgam'),
  data.frame(
    ar1 = as.data.frame(model2, variable = 'ar[1]')$`ar[1]`,
    model = 'brms'))

We can use ggplot2 to compare these estimates, which shows that they are also very similar:

ggplot(data = ar1_ests, aes(x = ar1, fill = model)) +
  geom_histogram(aes(y = after_stat(density)), 
                 bins = 40, col = 'white') +
  facet_grid(rows = 'model') +
  theme_bw() +
  theme(legend.position = 'none') +
  scale_fill_manual(values = c('darkblue', 'darkred')) +
  labs(x = expression(ar1[error]), y = 'Frequency') +
  xlim(c(0,1))
## Warning: Removed 4 rows containing missing values or values outside the scale range
## (`geom_bar()`).

AR processes and stationarity

As you may remember from the lectures, the larger this AR1 parameter becomes (in absolute value), the more a time series tends toward nonstationarity. We can look at some example functions to investigate this behaviour in more detail here

Click here if you would like to see the R code used to produce these simulations
# Define a function to simulate AR1 processes with a fixed error variance
simulate_ar1 = function(ar1 = 0.5, N = 50){
  # simulate the initial value of the series
  init <- rnorm(1, mean = 0, sd = 0.25)
  
  # create an empty vector to store the time series values
  states <- vector(length = N)
  
  # set the first value of the states as the initial value
  states[1] <- init
  
  # loop over remaining time points and fill in the AR1 process
  for(t in 2:N){
    states[t] <- rnorm(1, mean = ar1 * states[t - 1],
                       sd = 0.25)
  }
  
  return(states)
}

# plot a single AR1 series with an AR1 parameter = 0.5
plot(x = 1:50, y = simulate_ar1(ar1 = 0.5), 
     type = 'l',
     lwd = 2, col = 'darkred',
     ylab = expression(f(Time)),
     xlab = 'Time', bty = 'l',
     ylim = c(-2,2),
     main = expression(AR[1]~'='~0.5))

# overlay an additional 20 possible series using different colours
for(i in 1:20){
  lines(simulate_ar1(ar1 = 0.5),
        col = sample(c("#DCBCBC",
                       "#C79999",
                       "#B97C7C",
                       "#A25050",
                       "#7C0000"), 1),
        lwd = 2)
}
box(bty = 'l', lwd = 2)


This plot shows how an AR1 process with a smallish AR1 parameter (0.5) approaches stationarity fairly quickly. What happens if we increase this parameter toward the types of values that our models have estimated? The code to produce these simulations is the same as above, except the AR1 parameter was changed to 0.95
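
The simulation code is repeated below as a sketch, identical to the block above except for the AR1 parameter:

# plot a single AR1 series with an AR1 parameter = 0.95
plot(x = 1:50, y = simulate_ar1(ar1 = 0.95), 
     type = 'l',
     lwd = 2, col = 'darkred',
     ylab = expression(f(Time)),
     xlab = 'Time', bty = 'l',
     ylim = c(-2,2),
     main = expression(AR[1]~'='~0.95))

# overlay an additional 20 possible series using different colours
for(i in 1:20){
  lines(simulate_ar1(ar1 = 0.95),
        col = sample(c("#DCBCBC",
                       "#C79999",
                       "#B97C7C",
                       "#A25050",
                       "#7C0000"), 1),
        lwd = 2)
}
box(bty = 'l', lwd = 2)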

The variance of this particular series grows into the future for a much longer time than did the series we simulated above. This is useful for getting a sense of the relative stability of the underlying dynamic process. We can enforce a Random Walk so that the dynamic process will not be stationary (note, a Random Walk trend can only be fit in mvgam; there is currently no way of using such a process in brms that I’m aware of):

model3b <- mvgam(count ~ 
                  s(ndvi_lag12, k = 9) +
                  mintemp,
                family = poisson(),
                data = data_train,
                newdata = data_test,
                priors = prior(normal(0, 2), class = b),
                trend_model = RW())
Check here to see a mathematical description of the model

The model can be described mathematically as follows: \[\begin{align*} \boldsymbol{count}_t & \sim \text{Poisson}(\lambda_t) \\ log(\lambda_t) & = f(\boldsymbol{ndvi})_t + \beta_{mintemp} * \boldsymbol{mintemp}_t + z_t \\ z_t & \sim \text{Normal}(z_{t-1}, \sigma_{error}) \\ \sigma_{error} & \sim \text{Exponential}(2) \\ f(\boldsymbol{ndvi}) & = \sum_{k=1}^{K}b * \beta_{smooth} \\ \beta_{smooth} & \sim \text{MVNormal}(0, (\Omega * \lambda)^{-1}) \\ \beta_{mintemp} & \sim \text{Normal}(0, 1) \end{align*}\]

Note the similarity to model3 above. The only difference is that we no longer estimate the AR1 parameter, so the mean of the latent process at time \(t\) is simply the value of the latent process at time \(t-1\).


Plotting the trend estimates for the AR1 and Random Walk models gives you further insight into the different assumptions they make about the temporal evolution of the system:

layout(matrix(1:2, nrow = 2))
plot(model3, type = 'trend', newdata = data_test, main = 'AR1 trend')
plot(model3b, type = 'trend', newdata = data_test, main = 'RW trend')

Exercises

  1. Compare estimates for the latent residual error terms (\(\sigma_{error}\)) from model2 and model3. In mvgam, this parameter is called sigma[1], while in brms, it is called sderr (a template sketch follows this list)
  2. Compare estimates for the parametric effect of minimum temperature (\(\beta_{mintemp}\)) from model2 and model3. In mvgam, this parameter is called mintemp, while in brms, it is called b_mintemp
  3. Look at the Dunn-Smyth residuals for model3 and provide a few comments describing what you see (use plot.mvgam() with type = residuals). Does this model seem to capture the relevant features of the autocorrelated observed data?
  4. Inspect posterior hindcasts and forecasts from model3 using the steps we carried out in Tutorial 1
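
A template sketch for Exercise 1, following the same pattern used for the AR1 comparison above (replace the ? with the correct parameter names):

# Replace the ? with the correct parameter names
sigma_ests <- rbind(data.frame(
  sigma = as.data.frame(model3, variable = '?')$`?`,
  model = 'mvgam'),
  data.frame(
    sigma = as.data.frame(model2, variable = '?')$`?`,
    model = 'brms'))

# Then plot the two posterior distributions with ggplot(), as was done for
# the AR1 estimates above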

Gaussian Processes

Another way to capture dynamic errors in ecological time series is with a Gaussian Process:

“A Gaussian process defines a probability distribution over functions; in other words every sample from a Gaussian Process is an entire function from the covariate space X to the real-valued output space.” (Betancourt; Robust Gaussian Process Modeling)

In many time series analyses, we believe the power of past observations to explain current data is a function of how long ago we observed those past observations. Gaussian Processes (GPs) use covariance functions (often called ‘kernels’) that allow this to happen by specifying how correlations depend on the difference, in time, between observations. For our purposes we will rely on the squared exponential kernel, which depends on \(\rho\) (often called the length scale parameter) to control how quickly correlations decay as a function of time. The functions also depend on a parameter \(\alpha\), which controls how much the GP term contributes to the linear predictor. Below we simulate from such a GP to get an idea of the kinds of functional shapes that can be produced as you vary the length scale \(\rho\):
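
For reference, the squared exponential kernel used in these simulations can be written as \[k(t, t') = \alpha^2 \exp\left(-\frac{(t - t')^2}{2\rho^2}\right),\] so that larger values of \(\rho\) produce correlations that decay more slowly as the time difference \(|t - t'|\) grows. This is exactly the form used to build the covariance matrix in the simulation code below.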

Click here if you would like to see the R code used to produce these simulations
# set a seed for reproducibility
set.seed(2222)

# simulate from a fairly large length-scale parameter
rho <- 10

# set the alpha parameter to 1 for all simulations
alpha <- 1

# simulate functions that span 50 time points
times <- 1:50

# generate a sequence of prediction values
draw_seq <- seq(min(times), max(times), length.out = 100)

# construct the temporal covariance matrix
Sigma <- alpha^2 * 
  exp(-0.5 * ((outer(draw_seq, draw_seq, "-") / rho) ^ 2)) +
  diag(1e-12, length(draw_seq))

# draw a single realization from the GP distribution
gp_vals <- MASS::mvrnorm(n = 1, 
                         mu = rep(0, length(draw_seq)),
                         Sigma = Sigma)

# plot the single GP draw
plot(x = draw_seq, y = gp_vals, type = 'l',
     lwd = 2, col = 'darkred',
     ylab = expression(f(Time)),
     xlab = 'Time', bty = 'l',
     ylim = c(-3,3),
     main = expression(alpha~'='~1*'; '~rho~'='~10))
# overlay an additional 10 possible functions using different colours
for(i in 1:10){
  draw <- MASS::mvrnorm(n = 1, mu = rep(0, length(draw_seq)),
                        Sigma = Sigma)
  lines(x = draw_seq, y = draw, lwd = 3.5, col = 'white')
  lines(x = draw_seq, y = draw,
        col = sample(c("#DCBCBC",
                       "#C79999",
                       "#B97C7C",
                       "#A25050",
                       "#7C0000"), 1),
        lwd = 2.5)
}
box(bty = 'l', lwd = 2)


This plot shows that the temporal autocorrelations have a long ‘memory’, i.e. they tend to change slowly over time. In contrast, let’s see what happens when we simulate from a GP with a shorter length scale parameter (the same code as above was used, except the length scale was changed to 4)
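
The simulation code is repeated below as a sketch; the only change from the block above is the shorter length scale (alpha and the prediction sequence are unchanged):

# simulate from a shorter length-scale parameter
rho <- 4

# construct the temporal covariance matrix with the new length scale
Sigma <- alpha^2 * 
  exp(-0.5 * ((outer(draw_seq, draw_seq, "-") / rho) ^ 2)) +
  diag(1e-12, length(draw_seq))

# draw and plot a single realization from the GP distribution
gp_vals <- MASS::mvrnorm(n = 1, 
                         mu = rep(0, length(draw_seq)),
                         Sigma = Sigma)
plot(x = draw_seq, y = gp_vals, type = 'l',
     lwd = 2, col = 'darkred',
     ylab = expression(f(Time)),
     xlab = 'Time', bty = 'l',
     ylim = c(-3,3),
     main = expression(alpha~'='~1*'; '~rho~'='~4))

# overlay an additional 10 possible functions using different colours
for(i in 1:10){
  draw <- MASS::mvrnorm(n = 1, mu = rep(0, length(draw_seq)),
                        Sigma = Sigma)
  lines(x = draw_seq, y = draw, lwd = 3.5, col = 'white')
  lines(x = draw_seq, y = draw,
        col = sample(c("#DCBCBC",
                       "#C79999",
                       "#B97C7C",
                       "#A25050",
                       "#7C0000"), 1),
        lwd = 2.5)
}
box(bty = 'l', lwd = 2)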

These functions are much ‘wigglier’ and can be useful for capturing temporal dynamics with short-term ‘memory’. A big advantage of GPs is that they can easily handle data that are irregularly sampled, much like splines can (and in contrast to the autoregressive processes we’ve been using so far). But although GPs may look similar to splines in the functions they return, they are fundamentally different. In particular, a spline has only local knowledge, meaning it has no principled way of understanding how the function itself is evolving across time. A GP, in contrast, directly models how the function changes over time, which means it is better able to generate out-of-sample predictions.

Inference on GP parameters

Another big advantage of GPs is that their parameters are more directly interpretable. For instance, the length scale of a GP is related to the stationarity (or stability) of the system. We can plot how covariance is expected to change over time to understand this stability. Those of you that have worked with variograms or semi-variograms before when analysing spatial data will be used to visualizing how semivariance decays over distance. Here we will visualize how covariance decays over time differences (i.e. how far apart in time do two observations need to be before our estimates from the GP no longer depend on one another?). We can define a few helper functions to generate these plots, which will accept posterior estimates of GP parameters from a fitted mvgam object, compute GP covariance kernels, calculate posterior quantiles and then plot them:

Click here if you would like to see the R function plot_kernels, used to plot GP kernels
# Compute the covariance kernel for a given draw
# of GP alpha and rho parameters
quad_kernel = function(rho, alpha, times){
  covs <- alpha ^ 2 * 
    exp(-0.5 * ((times / rho) ^ 2))
  return(covs)
}

# Compute kernels for each posterior draw
plot_kernels = function(gp_ests, max_time = 50){
  # set up an empty matrix to store the kernel estimates 
  # for each posterior draw
  draw_seq <- seq(0, max_time, length.out = 100)
  kernels <- matrix(NA, nrow = NROW(gp_ests), ncol = 100)
  
  # use a for-loop to estimate the kernel for each draw using
  # our custom quad_kernel() function
  for(i in 1:NROW(gp_ests)){
    kernels[i,] <- quad_kernel(rho = gp_ests$`rho_gp_time_`[i],
                               alpha = gp_ests$`alpha_gp_time_`[i],
                               times = draw_seq)
  }
  
  # Calculate posterior empirical quantiles of the kernels
  probs <- c(0.05, 0.2, 0.5, 0.8, 0.95)
  cred <- sapply(1:NCOL(kernels),
                 function(n) quantile(kernels[,n],
                                      probs = probs))
  
  # Plot the kernel uncertainty estimates
  pred_vals <- draw_seq
  plot(1, xlim = c(0, max_time), ylim = c(0, max(cred)), type = 'n',
       xlab = 'Time difference', ylab = 'Covariance',
       bty = 'L')
  box(bty = 'L', lwd = 2)
  polygon(c(pred_vals, rev(pred_vals)), c(cred[1,], rev(cred[5,])),
          col = "#DCBCBC", border = NA)
  polygon(c(pred_vals, rev(pred_vals)), c(cred[2,], rev(cred[4,])),
          col = "#B97C7C", border = NA)
  lines(pred_vals, cred[3,], col = "#7C0000", lwd = 2.5)
}


Extract posterior estimates of the GP parameters using the as.data.frame function that we’ve been using previously. You can then plot the posterior kernel estimates for model5 to visualize how temporal error covariance is estimated to change as two time points are further apart. To do this, all we need to supply is the posterior estimates for the GP parameters in the form of a data.frame:

gp_ests <- as.data.frame(model5, variable = c('rho_gp_time_',
                                              'alpha_gp_time_'))
plot_kernels(gp_ests = gp_ests, max_time = 40)

This can be helpful to understand how long the temporal ‘memory’ of a given GP is, which also relates to how smooth its underlying functions are.

But one potential downside is that the length scale parameter \(\rho\) is fundamentally challenging to identify, particularly when estimating a latent Gaussian Process. For both the mvgam and brms models, our estimates of this parameter are somewhat wide:

gp_rhos <- rbind(data.frame(
  rho = as.data.frame(model5, variable = 'rho_gp_time_')$`rho_gp_time_`,
  model = 'mvgam'),
  data.frame(
    rho = as.data.frame(model4, variable = 'lscale_gptime')$lscale_gptime,
    model = 'brms'))

ggplot(data = gp_rhos, aes(x = rho, fill = model)) +
  geom_histogram(aes(y = after_stat(density)), 
                 bins = 40, col = 'white') +
  facet_grid(rows = 'model') +
  theme_bw() +
  theme(legend.position = 'none') +
  scale_fill_manual(values = c('darkblue', 'darkred')) +
  labs(x = expression(rho[GP]), y = 'Frequency') +
  xlim(c(0,24))
## Warning: Removed 9 rows containing non-finite outside the scale range
## (`stat_bin()`).
## Warning: Removed 4 rows containing missing values or values outside the scale range
## (`geom_bar()`).

This doesn’t cause issues in our example, but sometimes this lack of identifiability can cause problems. Imagine a scenario where a very short length scale leads to overly wiggly functions that compete with other temporal processes we’d like to estimate. Often in practice it can be useful to place a fairly restrictive prior that imposes the kind of smoothness we might expect to see. If you are interested in this, I recommend you see this useful (and detailed) case study on GPs by Michael Betancourt.
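
If you want to explore this, one option (a sketch only; the gp(time) term below is an assumption about how the brms GP model was specified, so check against your own fitted object) is to ask brms which prior classes are available for the GP term and then supply a tighter length-scale prior via set_prior():

# Ask brms which priors can be set for a GP-of-time model
# (the formula here is an assumption made for illustration)
get_prior(count ~ s(ndvi_lag12, k = 9) + mintemp + gp(time),
          family = poisson(), data = data_train)

# A more restrictive length-scale prior could then be passed when refitting,
# e.g. prior = set_prior("inv_gamma(5, 50)", class = "lscale")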

Exercises

  1. Fit a model that uses a spline of time (using bs = 'bs') and a Negative Binomial family in mvgam for comparisons. Plot the 1st derivative of this temporal spline and describe how (or if) it differs from that of model5
  2. Plot extrapolations from the spline model and take a few notes describing how these differ from the GP predictions in model5
  3. Compare the in-sample predictive accuracies of the two models using loo_compare(). Is either model favoured over the other? (a sketch follows this list)
  4. Consider watching the below lecture by Richard McElreath on Gaussian Processes and their wide uses in statistical modelling
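
A sketch for Exercise 3 (model_spline is a placeholder name for the Negative Binomial spline model you fit in Exercise 1):

# Compare approximate leave-one-out predictive performance of the two models
loo_compare(model5, model_spline)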

Session Info

sessionInfo()
## R version 4.3.1 (2023-06-16 ucrt)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 11 x64 (build 22621)
## 
## Matrix products: default
## 
## 
## locale:
## [1] LC_COLLATE=English_Australia.utf8  LC_CTYPE=English_Australia.utf8   
## [3] LC_MONETARY=English_Australia.utf8 LC_NUMERIC=C                      
## [5] LC_TIME=English_Australia.utf8    
## 
## time zone: Australia/Brisbane
## tzcode source: internal
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] ggplot2_3.5.0               tidybayes_3.0.6            
##  [3] mvgam_1.1.0                 insight_0.19.7             
##  [5] marginaleffects_0.17.0.9002 brms_2.21.0                
##  [7] Rcpp_1.0.12                 mgcv_1.8-42                
##  [9] nlme_3.1-162                dplyr_1.1.4                
## [11] downloadthis_0.3.3         
## 
## loaded via a namespace (and not attached):
##  [1] tidyselect_1.2.1     svUnit_1.0.6         farver_2.1.1        
##  [4] scoringRules_1.1.1   loo_2.6.0            fastmap_1.1.1       
##  [7] tensorA_0.36.2.1     digest_0.6.34        timechange_0.2.0    
## [10] mime_0.12            lifecycle_1.0.4      StanHeaders_2.26.28 
## [13] processx_3.8.3       magrittr_2.0.3       posterior_1.5.0     
## [16] compiler_4.3.1       rlang_1.1.3          sass_0.4.8          
## [19] tools_4.3.1          utf8_1.2.4           yaml_2.3.8          
## [22] collapse_2.0.7       data.table_1.14.10   knitr_1.45          
## [25] labeling_0.4.3       bridgesampling_1.1-2 pkgbuild_1.4.3      
## [28] plyr_1.8.9           cmdstanr_0.7.1       abind_1.4-5         
## [31] klippy_0.0.0.9500    withr_3.0.0          purrr_1.0.2         
## [34] grid_4.3.1           stats4_4.3.1         fansi_1.0.6         
## [37] colorspace_2.1-0     inline_0.3.19        scales_1.3.0        
## [40] MASS_7.3-60          cli_3.6.2            mvtnorm_1.2-4       
## [43] rmarkdown_2.25       generics_0.1.3       RcppParallel_5.1.7  
## [46] rstudioapi_0.15.0    reshape2_1.4.4       cachem_1.0.8        
## [49] rstan_2.32.3         stringr_1.5.1        splines_4.3.1       
## [52] bayesplot_1.10.0     assertthat_0.2.1     parallel_4.3.1      
## [55] matrixStats_1.2.0    base64enc_0.1-3      vctrs_0.6.5         
## [58] Matrix_1.6-4         jsonlite_1.8.8       arrayhelpers_1.1-0  
## [61] ggdist_3.3.1         jquerylib_0.1.4      tidyr_1.3.1         
## [64] glue_1.7.0           ps_1.7.5             codetools_0.2-19    
## [67] distributional_0.3.2 lubridate_1.9.3      stringi_1.8.3       
## [70] gtable_0.3.4         QuickJSR_1.0.9       munsell_0.5.0       
## [73] tibble_3.2.1         pillar_1.9.0         htmltools_0.5.7     
## [76] Brobdingnag_1.2-9    R6_2.5.1             evaluate_0.23       
## [79] lattice_0.21-8       highr_0.10           backports_1.4.1     
## [82] bslib_0.6.1          rstantools_2.3.1.1   bsplus_0.1.4        
## [85] zip_2.3.0            coda_0.19-4.1        gridExtra_2.3       
## [88] checkmate_2.3.1      xfun_0.42            fs_1.6.3            
## [91] pkgconfig_2.0.3