
Set up time-varying (dynamic) coefficients for use in mvgam models. Currently, only low-rank Gaussian Process smooths are available for estimating the dynamics of the time-varying coefficient.

Usage

dynamic(variable, k, rho = 5, stationary = TRUE, scale = TRUE)

Arguments

variable

The variable that the dynamic smooth will be a function of.

k

Optional number of basis functions for computing the approximate GP. If missing, k is set as large as possible to accurately estimate the nonlinear function.

rho

Either a positive numeric stating the length scale to be used for approximating the squared exponential Gaussian Process smooth (see gp.smooth for details), or missing, in which case the length scale will be estimated by setting up a Hilbert space approximate GP.

stationary

Logical. If TRUE (the default) and rho is supplied, the latent Gaussian Process smooth will not have a linear trend component. If FALSE, a linear trend in the covariate is added to the Gaussian Process smooth. Leave at TRUE if you do not believe the coefficient is evolving with much trend, as the linear component of the basis functions can be hard to penalize to zero, which sometimes causes divergence issues in Stan. See gp.smooth for details. Ignored if rho is missing (in which case a Hilbert space approximate GP is used).

scale

Logical; If TRUE (the default) and rho is missing, predictors are scaled so that the maximum Euclidean distance between two points is 1. This often improves sampling speed and convergence. Scaling also affects the estimated length-scale parameters in that they resemble those of scaled predictors (not of the original predictors) if scale is TRUE.
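The rescaling that scale = TRUE describes can be mimicked for a single predictor by dividing by the largest pairwise distance (a minimal base-R sketch with made-up values, not the internal mvgam code):

```r
# Hypothetical predictor values
x <- c(2, 5, 9, 14)

# Divide by the maximum pairwise (Euclidean) distance so that
# the largest distance between any two scaled points is 1
x_scaled <- x / max(dist(x))
max(dist(x_scaled))  # 1
```

Length scales estimated on x_scaled are then interpreted relative to this unit maximum distance, not the original units of x.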

Value

A list object for internal use in 'mvgam'.

Details

mvgam currently sets up dynamic coefficients as low-rank squared exponential Gaussian Process smooths via the call s(time, by = variable, bs = "gp", m = c(2, rho, 2)). These smooths, if specified with reasonable values for the length scale parameter, will give more realistic out-of-sample forecasts than standard splines such as thin plate or cubic. But the user must set the value for rho, as there is currently no support for estimating this value in mgcv. This may not be too big a problem, as estimating latent length scales is often difficult anyway. The rho parameter should be thought of as a prior on the smoothness of the latent dynamic coefficient function (where higher values of rho lead to smoother functions with more temporal covariance structure). Values of k are set automatically to ensure enough basis functions are used to approximate the expected wiggliness of the underlying dynamic function (k will increase as rho decreases).
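To see why rho acts like a smoothness prior, the squared exponential kernel underlying these smooths can be computed directly (a base-R sketch; sq_exp_kernel is a hypothetical helper, not an mvgam function):

```r
# Squared exponential covariance:
# k(t, t') = alpha^2 * exp(-0.5 * ((t - t') / rho)^2)
sq_exp_kernel <- function(times, alpha, rho) {
  d <- outer(times, times, "-")
  alpha^2 * exp(-0.5 * (d / rho)^2)
}

times <- 1:10
K_smooth <- sq_exp_kernel(times, alpha = 1, rho = 10)
K_wiggly <- sq_exp_kernel(times, alpha = 1, rho = 1)

# At a lag of four time steps, the larger length scale retains far
# more covariance, so sampled functions change much more slowly
K_smooth[1, 5]
K_wiggly[1, 5]
```

A small rho therefore implies rapid decay of temporal covariance, and more basis functions (larger k) are needed to capture the resulting wiggliness.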

Author

Nicholas J Clark

Examples

# \donttest{
# Simulate a time-varying coefficient
# (as a Gaussian Process with length scale = 10)
set.seed(1111)
N <- 200

# A function to simulate from a squared exponential Gaussian Process
sim_gp <- function(N, c, alpha, rho){
  Sigma <- alpha ^ 2 *
    exp(-0.5 * ((outer(1:N, 1:N, "-") / rho) ^ 2)) +
    diag(1e-9, N)
  c + mgcv::rmvn(1,
                 mu = rep(0, N),
                 V = Sigma)
}

beta <- sim_gp(alpha = 0.75,
               rho = 10,
               c = 0.5,
               N = N)
plot(beta, type = 'l', lwd = 3,
     bty = 'l', xlab = 'Time',
     ylab = 'Coefficient',
     col = 'darkred')


# Simulate the predictor as a standard normal
predictor <- rnorm(N, sd = 1)

# Simulate a Gaussian outcome variable
out <- rnorm(N, mean = 4 + beta * predictor,
             sd = 0.25)
time <- seq_along(predictor)
plot(out, type = 'l', lwd = 3,
     bty = 'l', xlab = 'Time', ylab = 'Outcome',
     col = 'darkred')


# Gather into a data.frame and fit a dynamic coefficient model
data <- data.frame(out, predictor, time)

# Split into training and testing
data_train <- data[1:190,]
data_test <- data[191:200,]

# Fit a model using the dynamic function
mod <- mvgam(out ~
               # mis-specify the length scale slightly as this
               # won't be known in practice
               dynamic(predictor, rho = 8, stationary = TRUE),
             family = gaussian(),
             data = data_train,
             chains = 2)
#> Compiling Stan program using cmdstanr
#> 
#> Start sampling
#> Running MCMC with 2 parallel chains...
#> 
#> Chain 1 Iteration:   1 / 1000 [  0%]  (Warmup) 
#> Chain 2 Iteration:   1 / 1000 [  0%]  (Warmup) 
#> Chain 1 Iteration: 100 / 1000 [ 10%]  (Warmup) 
#> Chain 2 Iteration: 100 / 1000 [ 10%]  (Warmup) 
#> Chain 2 Iteration: 200 / 1000 [ 20%]  (Warmup) 
#> Chain 1 Iteration: 200 / 1000 [ 20%]  (Warmup) 
#> Chain 1 Iteration: 300 / 1000 [ 30%]  (Warmup) 
#> Chain 2 Iteration: 300 / 1000 [ 30%]  (Warmup) 
#> Chain 1 Iteration: 400 / 1000 [ 40%]  (Warmup) 
#> Chain 2 Iteration: 400 / 1000 [ 40%]  (Warmup) 
#> Chain 1 Iteration: 500 / 1000 [ 50%]  (Warmup) 
#> Chain 1 Iteration: 501 / 1000 [ 50%]  (Sampling) 
#> Chain 2 Iteration: 500 / 1000 [ 50%]  (Warmup) 
#> Chain 1 Iteration: 600 / 1000 [ 60%]  (Sampling) 
#> Chain 2 Iteration: 501 / 1000 [ 50%]  (Sampling) 
#> Chain 2 Iteration: 600 / 1000 [ 60%]  (Sampling) 
#> Chain 1 Iteration: 700 / 1000 [ 70%]  (Sampling) 
#> Chain 2 Iteration: 700 / 1000 [ 70%]  (Sampling) 
#> Chain 1 Iteration: 800 / 1000 [ 80%]  (Sampling) 
#> Chain 2 Iteration: 800 / 1000 [ 80%]  (Sampling) 
#> Chain 1 Iteration: 900 / 1000 [ 90%]  (Sampling) 
#> Chain 2 Iteration: 900 / 1000 [ 90%]  (Sampling) 
#> Chain 1 Iteration: 1000 / 1000 [100%]  (Sampling) 
#> Chain 1 finished in 1.8 seconds.
#> Chain 2 Iteration: 1000 / 1000 [100%]  (Sampling) 
#> Chain 2 finished in 1.9 seconds.
#> 
#> Both chains finished successfully.
#> Mean chain execution time: 1.8 seconds.
#> Total execution time: 2.0 seconds.
#> 

# Inspect the summary
summary(mod)
#> GAM formula:
#> out ~ s(time, by = predictor, bs = "gp", m = c(-2, 8, 2), k = 27)
#> <environment: 0x000001340b104778>
#> 
#> Family:
#> gaussian
#> 
#> Link function:
#> identity
#> 
#> Trend model:
#> None
#> 
#> N series:
#> 1 
#> 
#> N timepoints:
#> 190 
#> 
#> Status:
#> Fitted using Stan 
#> 2 chains, each with iter = 1000; warmup = 500; thin = 1 
#> Total post-warmup draws = 1000
#> 
#> 
#> Observation error parameter estimates:
#>              2.5%  50% 97.5% Rhat n_eff
#> sigma_obs[1] 0.21 0.24  0.27    1  1014
#> 
#> GAM coefficient (beta) estimates:
#>                         2.5%    50%  97.5% Rhat n_eff
#> (Intercept)           3.9000  4.000  4.000 1.00  1248
#> s(time):predictor.1  -0.4900 -0.027  0.430 1.00   255
#> s(time):predictor.2   0.6200  0.660  0.710 1.00  1275
#> s(time):predictor.3   0.1700  0.330  0.490 1.00   258
#> s(time):predictor.4  -0.3400 -0.290 -0.240 1.00   883
#> s(time):predictor.5   0.0340  0.140  0.240 1.00   289
#> s(time):predictor.6  -0.7100 -0.670 -0.620 1.00  1037
#> s(time):predictor.7  -0.3600 -0.270 -0.180 1.01   276
#> s(time):predictor.8  -0.2300 -0.180 -0.130 1.00   972
#> s(time):predictor.9   0.1600  0.250  0.330 1.00   350
#> s(time):predictor.10 -0.1900 -0.130 -0.068 1.00  1088
#> s(time):predictor.11  0.0050  0.099  0.190 1.00   420
#> s(time):predictor.12 -0.4400 -0.360 -0.290 1.00   919
#> s(time):predictor.13  0.0047  0.110  0.220 1.00   501
#> s(time):predictor.14 -0.2500 -0.160 -0.075 1.00  1201
#> s(time):predictor.15 -0.0690  0.055  0.180 1.00   602
#> s(time):predictor.16  0.0180  0.130  0.250 1.00   957
#> s(time):predictor.17  0.0910  0.230  0.390 1.00   525
#> s(time):predictor.18 -0.1000  0.044  0.180 1.00  1081
#> s(time):predictor.19  0.0250  0.200  0.360 1.00   772
#> s(time):predictor.20 -0.2400 -0.042  0.150 1.00   965
#> s(time):predictor.21 -0.2100  0.041  0.290 1.00   902
#> s(time):predictor.22 -0.4500 -0.190  0.086 1.00  1139
#> s(time):predictor.23 -0.0520  0.260  0.580 1.00  1134
#> s(time):predictor.24 -0.4300 -0.086  0.260 1.00  1030
#> s(time):predictor.25 -0.2000  0.180  0.560 1.00  1193
#> s(time):predictor.26 -0.6700 -0.220  0.240 1.00   988
#> s(time):predictor.27 -0.0710  0.450  0.960 1.00   255
#> 
#> Approximate significance of GAM smooths:
#>                    edf Ref.df Chi.sq p-value    
#> s(time):predictor 13.8     27    273  <2e-16 ***
#> ---
#> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#> 
#> Stan MCMC diagnostics:
#> n_eff / iter looks reasonable for all parameters
#> Rhat looks reasonable for all parameters
#> 0 of 1000 iterations ended with a divergence (0%)
#> 0 of 1000 iterations saturated the maximum tree depth of 12 (0%)
#> E-FMI indicated no pathological behavior
#> 
#> Samples were drawn using NUTS(diag_e) at Thu Aug 29 10:46:15 AM 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split MCMC chains
#> (at convergence, Rhat = 1)

# Plot the time-varying coefficient estimates
plot(mod, type = 'smooths')


# Extrapolate the coefficient forward in time
plot_mvgam_smooth(mod, smooth = 1, newdata = data)
abline(v = 190, lty = 'dashed', lwd = 2)

# Overlay the true simulated time-varying coefficient
lines(beta, lwd = 2.5, col = 'white')
lines(beta, lwd = 2)

# }