This function lists the parameters whose prior distributions can be changed for a given mvgam model, along with their default distributions.

Usage

get_mvgam_priors(
  formula,
  trend_formula,
  data,
  data_train,
  family = "poisson",
  knots,
  trend_knots,
  use_lv = FALSE,
  n_lv,
  use_stan = TRUE,
  trend_model = "None",
  trend_map,
  drift = FALSE
)

Arguments

formula

A character string specifying the GAM observation model formula. These are exactly like the formula for a GLM except that smooth terms, s(), te(), ti(), t2(), as well as time-varying dynamic() terms and nonparametric gp() terms, can be added to the right hand side to specify that the linear predictor depends on smooth functions of predictors (or linear functionals of these). In nmix() family models, the formula is used to set up a linear predictor for the detection probability. Details of the formula syntax used by mvgam can be found in mvgam_formulae

trend_formula

An optional character string specifying the GAM process model formula. If supplied, a linear predictor will be modelled for the latent trends to capture process model evolution separately from the observation model. Should not have a response variable specified on the left-hand side of the formula (i.e. a valid option would be ~ season + s(year)). Also note that you should not use the identifier series in this formula to specify effects that vary across time series. Instead you should use trend. This will ensure that models in which a trend_map is supplied will still work consistently (i.e. by allowing effects to vary across process models, even when some time series share the same underlying process model). This feature is currently only available for RW(), AR() and VAR() trend models. In nmix() family models, the trend_formula is used to set up a linear predictor for the underlying latent abundance. Be aware that it can be very challenging to simultaneously estimate intercept parameters for both the observation model (captured by formula) and the process model (captured by trend_formula). Users are recommended to drop one of these using the - 1 convention on the formula right-hand side.
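
As a minimal sketch (not run; dat is a hypothetical data object set up as described under data below), a process model formula can be supplied alongside the observation formula, dropping the observation intercept as recommended:

mod <- mvgam(y ~ series - 1,
             trend_formula = ~ s(season, bs = 'cc'),
             trend_model = AR(),
             family = poisson(),
             data = dat,
             run_model = FALSE)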

data

A dataframe or list containing the model response variable and covariates required by the GAM formula and optional trend_formula. Most models should include columns:

    • series (a factor index of the series IDs; the number of levels should be identical to the number of unique series labels (i.e. n_series = length(levels(data$series))))

    • time (numeric or integer index of the time point for each observation). For most dynamic trend types available in mvgam (see argument trend_model), time should be measured in discrete, regularly spaced intervals (i.e. c(1, 2, 3, ...)). However, you can use irregularly spaced intervals if using trend_model = CAR(1), though note that any temporal intervals that are exactly 0 will be adjusted to a very small number (1e-12) to prevent sampling errors. See an example of CAR() trends in CAR

    Note, however, that there are special cases where these identifiers are not needed. For example, models with hierarchical temporal correlation processes (e.g. AR(gr = region, subgr = species)) should NOT include a series identifier, as this will be constructed internally (see mvgam_trends and AR for details). mvgam can also fit models that do not include a time variable if there are no temporal dynamic structures included (i.e. trend_model = 'None' or trend_model = ZMVN()). data should also include any other variables to be included in the linear predictor of formula (see the sketch below)
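
A minimal sketch of a valid data object containing the identifiers described above (values are illustrative only):

dat <- data.frame(y = rpois(12, lambda = 3),
                  season = rep(1:6, times = 2),
                  series = factor(rep(c('series_1', 'series_2'), each = 6)),
                  time = rep(1:6, times = 2))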

data_train

Deprecated. Still works in place of data but users are recommended to use data instead for more seamless integration into R workflows

family

family specifying the exponential observation family for the series. Currently supported families are:

  • gaussian() for real-valued data

  • betar() for proportional data on (0,1)

  • lognormal() for non-negative real-valued data

  • student_t() for real-valued data

  • Gamma() for non-negative real-valued data

  • bernoulli() for binary data

  • poisson() for count data

  • nb() for overdispersed count data

  • binomial() for count data with imperfect detection when the number of trials is known; note that the cbind() function must be used to bind the discrete observations and the discrete number of trials (see the sketch below this list)

  • beta_binomial() as for binomial() but allows for overdispersion

  • nmix() for count data with imperfect detection when the number of trials is unknown and should be modeled via a State-Space N-Mixture model. The latent states are Poisson, capturing the 'true' latent abundance, while the observation process is Binomial to account for imperfect detection. See mvgam_families for an example of how to use this family

Note that only nb() and poisson() are available if using JAGS as the backend. Default is poisson(). See mvgam_families for more details
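
As flagged in the binomial() entry above, a hedged sketch of the cbind() syntax (not run; successes and trials are hypothetical columns in dat):

get_mvgam_priors(cbind(successes, trials) ~ s(season),
                 family = binomial(),
                 data = dat)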

knots

An optional list containing user specified knot values to be used for basis construction. For most bases the user simply supplies the knots to be used, which must match up with the k value supplied (note that the number of knots is not always just k). Different terms can use different numbers of knots, unless they share a covariate
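
A brief sketch, assuming mgcv's convention of supplying the two boundary knots for a cyclic smooth (values are illustrative):

get_mvgam_priors(y ~ s(season, bs = 'cc', k = 5),
                 knots = list(season = c(0.5, 12.5)),
                 data = dat)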

trend_knots

As for knots above, this is an optional list of knot values for smooth functions within the trend_formula

use_lv

Logical. If TRUE, use dynamic factors to estimate series' latent trends in a reduced dimension format. Only available for RW(), AR() and GP() trend models. Defaults to FALSE

n_lv

Integer specifying the number of latent dynamic factors to use if use_lv == TRUE. Cannot be > n_series. Defaults arbitrarily to min(2, floor(n_series / 2))

use_stan

Logical. If TRUE, the model will be compiled and sampled using Hamiltonian Monte Carlo with a call to cmdstan_model or a call to stan. Note that there are many more options when using Stan vs JAGS

trend_model

character or function specifying the time series dynamics for the latent trend. Options are:

  • 'None' (no latent trend component; i.e. the GAM component is all that contributes to the linear predictor, and the observation process is the only source of error; similar to what is estimated by gam)

  • 'ZMVN' or ZMVN() (Zero-Mean Multivariate Normal; only available in Stan)

  • 'RW' or RW()

  • 'AR1' or AR(p = 1)

  • 'AR2' or AR(p = 2)

  • 'AR3' or AR(p = 3)

  • 'CAR1' or CAR(p = 1)

  • 'VAR1' or VAR() (only available in Stan)

  • 'PWlogistic', 'PWlinear' or PW() (only available in Stan)

  • 'GP' or GP() (Gaussian Process with squared exponential kernel; only available in Stan)

For all trend types apart from ZMVN(), GP(), CAR() and PW(), moving average and/or correlated process error terms can also be estimated (for example, RW(cor = TRUE) will set up a multivariate Random Walk if n_series > 1). It is also possible for many multivariate trends to estimate hierarchical correlations if the data are structured among levels of a relevant grouping factor. See mvgam_trends for more details and see ZMVN for an example.
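
For example, a correlated multivariate Random Walk trend can be requested directly through the trend_model argument (a sketch, assuming dat contains multiple series):

get_mvgam_priors(y ~ s(season, bs = 'cc'),
                 trend_model = RW(cor = TRUE),
                 data = dat)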

trend_map

Optional data.frame specifying which series should depend on which latent trends. Useful for allowing multiple series to depend on the same latent trend process, but with different observation processes. If supplied, a latent factor model is set up by setting use_lv = TRUE and using the mapping to set up the shared trends. Needs to have column names series and trend, with integer values in the trend column to state which trend each series should depend on. The series column should have a single unique entry for each series in the data (names should perfectly match factor levels of the series variable in data). Note that if this is supplied, the intercept parameter in the process model will NOT be automatically suppressed. See examples for details
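
A minimal sketch of a trend_map forcing two observed series to share a single latent trend (the series entries must exactly match the factor levels of dat$series):

trend_map <- data.frame(series = c('series_1', 'series_2'),
                        trend = c(1, 1))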

drift

Deprecated. If you wish to estimate drift parameters, include parametric fixed effects of 'time' in your formulae instead.

Value

Either a data.frame containing the prior definitions (if any suitable priors can be altered by the user) or NULL, indicating that no priors in the model can be modified through the mvgam interface

Details

Users can supply a model formula, prior to fitting the model, so that default priors can be inspected and altered. To make alterations, change the contents of the prior column and supply this data.frame to the mvgam function using the argument priors. If using Stan as the backend, users can also modify the parameter bounds by editing the new_lowerbound and/or new_upperbound columns. This will be necessary when using restrictive distributions for some parameters, such as a Beta distribution for the trend sd parameters (Beta only has support on (0,1), so the upper bound cannot be above 1). Another option is to make use of the prior modification functions in brms (i.e. prior) to change prior distributions and bounds (just use the name of the parameter that you'd like to change as the class argument; see examples below)
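
As a brief sketch (assuming a dat object as described under data), both the prior and its bounds can be edited before passing the data.frame to mvgam; the 'trend sd' label matches the param_info column of the output:

priors <- get_mvgam_priors(y ~ s(season),
                           data = dat,
                           trend_model = RW())
priors$prior[priors$param_info == 'trend sd'] <- 'sigma ~ beta(2, 2);'
priors$new_lowerbound[priors$param_info == 'trend sd'] <- 0
priors$new_upperbound[priors$param_info == 'trend sd'] <- 1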

Note

Only the prior, new_lowerbound and/or new_upperbound columns of the output should be altered when defining priors for the mvgam model. Use only if you are familiar with the underlying probabilistic programming language. There are no sanity checks done to ensure that the code is legal (e.g. to check that lower bounds are smaller than upper bounds)

Author

Nicholas J Clark

Examples

# \donttest{
# Simulate three integer-valued time series
library(mvgam)
dat <- sim_mvgam(trend_rel = 0.5)

# Get a model file that uses default mvgam priors for inspection (not always necessary,
# but this can be useful for testing whether your updated priors are written correctly)
mod_default <- mvgam(y ~ s(series, bs = 're') +
              s(season, bs = 'cc') - 1,
              family = nb(),
              data = dat$data_train,
              trend_model = AR(p = 2),
              run_model = FALSE)
#> Your model may benefit from using "noncentred = TRUE"

# Inspect the model file with default mvgam priors
code(mod_default)
#> // Stan model code generated by package mvgam
#> functions {
#>   vector rep_each(vector x, int K) {
#>     int N = rows(x);
#>     vector[N * K] y;
#>     int pos = 1;
#>     for (n in 1 : N) {
#>       for (k in 1 : K) {
#>         y[pos] = x[n];
#>         pos += 1;
#>       }
#>     }
#>     return y;
#>   }
#> }
#> data {
#>   int<lower=0> total_obs; // total number of observations
#>   int<lower=0> n; // number of timepoints per series
#>   int<lower=0> n_sp; // number of smoothing parameters
#>   int<lower=0> n_series; // number of series
#>   int<lower=0> num_basis; // total number of basis coefficients
#>   vector[num_basis] zero; // prior locations for basis coefficients
#>   matrix[total_obs, num_basis] X; // mgcv GAM design matrix
#>   array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
#>   matrix[8, 8] S1; // mgcv smooth penalty matrix S1
#>   int<lower=0> n_nonmissing; // number of nonmissing observations
#>   array[n_nonmissing] int<lower=0> flat_ys; // flattened nonmissing observations
#>   matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
#>   array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
#> }
#> parameters {
#>   // raw basis coefficients
#>   vector[num_basis] b_raw;
#>   
#>   // random effect variances
#>   vector<lower=0>[1] sigma_raw;
#>   
#>   // random effect means
#>   vector[1] mu_raw;
#>   
#>   // negative binomial overdispersion
#>   vector<lower=0>[n_series] phi_inv;
#>   
#>   // latent trend AR1 terms
#>   vector<lower=-1, upper=1>[n_series] ar1;
#>   
#>   // latent trend AR2 terms
#>   vector<lower=-1, upper=1>[n_series] ar2;
#>   
#>   // latent trend variance parameters
#>   vector<lower=0>[n_series] sigma;
#>   
#>   // latent trends
#>   matrix[n, n_series] trend;
#>   
#>   // smoothing parameters
#>   vector<lower=0>[n_sp] lambda;
#> }
#> transformed parameters {
#>   // basis coefficients
#>   vector[num_basis] b;
#>   b[1 : 8] = b_raw[1 : 8];
#>   b[9 : 11] = mu_raw[1] + b_raw[9 : 11] * sigma_raw[1];
#> }
#> model {
#>   // prior for random effect population variances
#>   sigma_raw ~ student_t(3, 0, 2.5);
#>   
#>   // prior for random effect population means
#>   mu_raw ~ std_normal();
#>   
#>   // prior for s(season)...
#>   b_raw[1 : 8] ~ multi_normal_prec(zero[1 : 8], S1[1 : 8, 1 : 8] * lambda[1]);
#>   
#>   // prior (non-centred) for s(series)...
#>   b_raw[9 : 11] ~ std_normal();
#>   
#>   // priors for AR parameters
#>   ar1 ~ std_normal();
#>   ar2 ~ std_normal();
#>   
#>   // priors for smoothing parameters
#>   lambda ~ normal(5, 30);
#>   
#>   // priors for overdispersion parameters
#>   phi_inv ~ student_t(3, 0, 0.1);
#>   
#>   // priors for latent trend variance parameters
#>   sigma ~ student_t(3, 0, 2.5);
#>   
#>   // trend estimates
#>   trend[1, 1 : n_series] ~ normal(0, sigma);
#>   trend[2, 1 : n_series] ~ normal(trend[1, 1 : n_series] * ar1, sigma);
#>   for (s in 1 : n_series) {
#>     trend[3 : n, s] ~ normal(ar1[s] * trend[2 : (n - 1), s]
#>                              + ar2[s] * trend[1 : (n - 2), s], sigma[s]);
#>   }
#>   {
#>     // likelihood functions
#>     vector[n_nonmissing] flat_trends;
#>     array[n_nonmissing] real flat_phis;
#>     flat_trends = to_vector(trend)[obs_ind];
#>     flat_phis = to_array_1d(rep_each(phi_inv, n)[obs_ind]);
#>     flat_ys ~ neg_binomial_2(exp(append_col(flat_xs, flat_trends)
#>                                  * append_row(b, 1.0)),
#>                              inv(flat_phis));
#>   }
#> }
#> generated quantities {
#>   vector[total_obs] eta;
#>   matrix[n, n_series] mus;
#>   vector[n_sp] rho;
#>   vector[n_series] tau;
#>   array[n, n_series] int ypred;
#>   matrix[n, n_series] phi_vec;
#>   vector[n_series] phi;
#>   phi = inv(phi_inv);
#>   for (s in 1 : n_series) {
#>     phi_vec[1 : n, s] = rep_vector(phi[s], n);
#>   }
#>   rho = log(lambda);
#>   for (s in 1 : n_series) {
#>     tau[s] = pow(sigma[s], -2.0);
#>   }
#>   
#>   // posterior predictions
#>   eta = X * b;
#>   for (s in 1 : n_series) {
#>     mus[1 : n, s] = eta[ytimes[1 : n, s]] + trend[1 : n, s];
#>     ypred[1 : n, s] = neg_binomial_2_rng(exp(mus[1 : n, s]), phi_vec[1 : n, s]);
#>   }
#> }
#> 
#> 

# Look at which priors can be updated in mvgam
test_priors <- get_mvgam_priors(y ~ s(series, bs = 're') +
                              s(season, bs = 'cc') - 1,
                              family = nb(),
                              data = dat$data_train,
                              trend_model = AR(p = 2))
test_priors
#>                                param_name param_length
#> 1           vector<lower=0>[n_sp] lambda;            2
#> 2                       vector[1] mu_raw;            1
#> 3           vector<lower=0>[1] sigma_raw;            1
#> 4 vector<lower=-1,upper=1>[n_series] ar1;            3
#> 5 vector<lower=-1,upper=1>[n_series] ar2;            3
#> 6        vector<lower=0>[n_series] sigma;            3
#> 7      vector<lower=0>[n_series] phi_inv;            3
#>                    param_info                             prior
#> 1 s(season) smooth parameters           lambda ~ normal(5, 30);
#> 2          s(series) pop mean            mu_raw ~ std_normal();
#> 3            s(series) pop sd sigma_raw ~ student_t(3, 0, 2.5);
#> 4       trend AR1 coefficient               ar1 ~ std_normal();
#> 5       trend AR2 coefficient               ar2 ~ std_normal();
#> 6                    trend sd     sigma ~ student_t(3, 0, 2.5);
#> 7    inverse of NB dispersion   phi_inv ~ student_t(3, 0, 0.1);
#>                   example_change new_lowerbound new_upperbound
#> 1    lambda ~ exponential(0.53);             NA             NA
#> 2    mu_raw ~ normal(0.3, 0.11);             NA             NA
#> 3 sigma_raw ~ exponential(0.34);             NA             NA
#> 4     ar1 ~ normal(-0.87, 0.84);             NA             NA
#> 5      ar2 ~ normal(0.56, 0.76);             NA             NA
#> 6     sigma ~ exponential(0.88);             NA             NA
#> 7  phi_inv ~ normal(0.81, 0.92);             NA             NA

# Make a few changes; first, change the population mean for the series-level
# random intercepts
test_priors$prior[2] <- 'mu_raw ~ normal(0.2, 0.5);'

# Now use stronger regularisation for the series-level AR2 coefficients
test_priors$prior[5] <- 'ar2 ~ normal(0, 0.25);'

# Check that the changes are made to the model file without any warnings by
# setting 'run_model = FALSE'
mod <- mvgam(y ~ s(series, bs = 're') +
            s(season, bs = 'cc') - 1,
            family = nb(),
            data = dat$data_train,
            trend_model = AR(p = 2),
            priors = test_priors,
            run_model = FALSE)
#> Your model may benefit from using "noncentred = TRUE"
code(mod)
#> // Stan model code generated by package mvgam
#> functions {
#>   vector rep_each(vector x, int K) {
#>     int N = rows(x);
#>     vector[N * K] y;
#>     int pos = 1;
#>     for (n in 1 : N) {
#>       for (k in 1 : K) {
#>         y[pos] = x[n];
#>         pos += 1;
#>       }
#>     }
#>     return y;
#>   }
#> }
#> data {
#>   int<lower=0> total_obs; // total number of observations
#>   int<lower=0> n; // number of timepoints per series
#>   int<lower=0> n_sp; // number of smoothing parameters
#>   int<lower=0> n_series; // number of series
#>   int<lower=0> num_basis; // total number of basis coefficients
#>   vector[num_basis] zero; // prior locations for basis coefficients
#>   matrix[total_obs, num_basis] X; // mgcv GAM design matrix
#>   array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
#>   matrix[8, 8] S1; // mgcv smooth penalty matrix S1
#>   int<lower=0> n_nonmissing; // number of nonmissing observations
#>   array[n_nonmissing] int<lower=0> flat_ys; // flattened nonmissing observations
#>   matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
#>   array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
#> }
#> parameters {
#>   // raw basis coefficients
#>   vector[num_basis] b_raw;
#>   
#>   // random effect variances
#>   vector<lower=0>[1] sigma_raw;
#>   
#>   // random effect means
#>   vector[1] mu_raw;
#>   
#>   // negative binomial overdispersion
#>   vector<lower=0>[n_series] phi_inv;
#>   
#>   // latent trend AR1 terms
#>   vector<lower=-1, upper=1>[n_series] ar1;
#>   
#>   // latent trend AR2 terms
#>   vector<lower=-1, upper=1>[n_series] ar2;
#>   
#>   // latent trend variance parameters
#>   vector<lower=0>[n_series] sigma;
#>   
#>   // latent trends
#>   matrix[n, n_series] trend;
#>   
#>   // smoothing parameters
#>   vector<lower=0>[n_sp] lambda;
#> }
#> transformed parameters {
#>   // basis coefficients
#>   vector[num_basis] b;
#>   b[1 : 8] = b_raw[1 : 8];
#>   b[9 : 11] = mu_raw[1] + b_raw[9 : 11] * sigma_raw[1];
#> }
#> model {
#>   // prior for random effect population variances
#>   sigma_raw ~ student_t(3, 0, 2.5);
#>   
#>   // prior for random effect population means
#>   mu_raw ~ normal(0.2, 0.5);
#>   
#>   // prior for s(season)...
#>   b_raw[1 : 8] ~ multi_normal_prec(zero[1 : 8], S1[1 : 8, 1 : 8] * lambda[1]);
#>   
#>   // prior (non-centred) for s(series)...
#>   b_raw[9 : 11] ~ std_normal();
#>   
#>   // priors for AR parameters
#>   ar1 ~ std_normal();
#>   ar2 ~ normal(0, 0.25);
#>   
#>   // priors for smoothing parameters
#>   lambda ~ normal(5, 30);
#>   
#>   // priors for overdispersion parameters
#>   phi_inv ~ student_t(3, 0, 0.1);
#>   
#>   // priors for latent trend variance parameters
#>   sigma ~ student_t(3, 0, 2.5);
#>   
#>   // trend estimates
#>   trend[1, 1 : n_series] ~ normal(0, sigma);
#>   trend[2, 1 : n_series] ~ normal(trend[1, 1 : n_series] * ar1, sigma);
#>   for (s in 1 : n_series) {
#>     trend[3 : n, s] ~ normal(ar1[s] * trend[2 : (n - 1), s]
#>                              + ar2[s] * trend[1 : (n - 2), s], sigma[s]);
#>   }
#>   {
#>     // likelihood functions
#>     vector[n_nonmissing] flat_trends;
#>     array[n_nonmissing] real flat_phis;
#>     flat_trends = to_vector(trend)[obs_ind];
#>     flat_phis = to_array_1d(rep_each(phi_inv, n)[obs_ind]);
#>     flat_ys ~ neg_binomial_2(exp(append_col(flat_xs, flat_trends)
#>                                  * append_row(b, 1.0)),
#>                              inv(flat_phis));
#>   }
#> }
#> generated quantities {
#>   vector[total_obs] eta;
#>   matrix[n, n_series] mus;
#>   vector[n_sp] rho;
#>   vector[n_series] tau;
#>   array[n, n_series] int ypred;
#>   matrix[n, n_series] phi_vec;
#>   vector[n_series] phi;
#>   phi = inv(phi_inv);
#>   for (s in 1 : n_series) {
#>     phi_vec[1 : n, s] = rep_vector(phi[s], n);
#>   }
#>   rho = log(lambda);
#>   for (s in 1 : n_series) {
#>     tau[s] = pow(sigma[s], -2.0);
#>   }
#>   
#>   // posterior predictions
#>   eta = X * b;
#>   for (s in 1 : n_series) {
#>     mus[1 : n, s] = eta[ytimes[1 : n, s]] + trend[1 : n, s];
#>     ypred[1 : n, s] = neg_binomial_2_rng(exp(mus[1 : n, s]), phi_vec[1 : n, s]);
#>   }
#> }
#> 
#> 

# No warnings; the model is now ready for fitting in the usual way with the addition
# of the 'priors' argument

# The same can be done using 'brms' functions; here we will also change the ar1 prior
# and put some bounds on the ar coefficients to enforce stationarity; we set the
# prior using the 'class' argument in all brms prior functions
brmsprior <- c(prior(normal(0.2, 0.5), class = mu_raw),
              prior(normal(0, 0.25), class = ar1, lb = -1, ub = 1),
              prior(normal(0, 0.25), class = ar2, lb = -1, ub = 1))
brmsprior
#>             prior  class coef group resp dpar nlpar   lb   ub source
#>  normal(0.2, 0.5) mu_raw                            <NA> <NA>   user
#>   normal(0, 0.25)    ar1                              -1    1   user
#>   normal(0, 0.25)    ar2                              -1    1   user

mod <- mvgam(y ~ s(series, bs = 're') +
            s(season, bs = 'cc') - 1,
          family = nb(),
          data = dat$data_train,
          trend_model = AR(p = 2),
          priors = brmsprior,
          run_model = FALSE)
#> Your model may benefit from using "noncentred = TRUE"
code(mod)
#> // Stan model code generated by package mvgam
#> functions {
#>   vector rep_each(vector x, int K) {
#>     int N = rows(x);
#>     vector[N * K] y;
#>     int pos = 1;
#>     for (n in 1 : N) {
#>       for (k in 1 : K) {
#>         y[pos] = x[n];
#>         pos += 1;
#>       }
#>     }
#>     return y;
#>   }
#> }
#> data {
#>   int<lower=0> total_obs; // total number of observations
#>   int<lower=0> n; // number of timepoints per series
#>   int<lower=0> n_sp; // number of smoothing parameters
#>   int<lower=0> n_series; // number of series
#>   int<lower=0> num_basis; // total number of basis coefficients
#>   vector[num_basis] zero; // prior locations for basis coefficients
#>   matrix[total_obs, num_basis] X; // mgcv GAM design matrix
#>   array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
#>   matrix[8, 8] S1; // mgcv smooth penalty matrix S1
#>   int<lower=0> n_nonmissing; // number of nonmissing observations
#>   array[n_nonmissing] int<lower=0> flat_ys; // flattened nonmissing observations
#>   matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
#>   array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
#> }
#> parameters {
#>   // raw basis coefficients
#>   vector[num_basis] b_raw;
#>   
#>   // random effect variances
#>   vector<lower=0>[1] sigma_raw;
#>   
#>   // random effect means
#>   vector[1] mu_raw;
#>   
#>   // negative binomial overdispersion
#>   vector<lower=0>[n_series] phi_inv;
#>   
#>   // latent trend AR1 terms
#>   vector<lower=-1, upper=1>[n_series] ar1;
#>   
#>   // latent trend AR2 terms
#>   vector<lower=-1, upper=1>[n_series] ar2;
#>   
#>   // latent trend variance parameters
#>   vector<lower=0>[n_series] sigma;
#>   
#>   // latent trends
#>   matrix[n, n_series] trend;
#>   
#>   // smoothing parameters
#>   vector<lower=0>[n_sp] lambda;
#> }
#> transformed parameters {
#>   // basis coefficients
#>   vector[num_basis] b;
#>   b[1 : 8] = b_raw[1 : 8];
#>   b[9 : 11] = mu_raw[1] + b_raw[9 : 11] * sigma_raw[1];
#> }
#> model {
#>   // prior for random effect population variances
#>   sigma_raw ~ student_t(3, 0, 2.5);
#>   
#>   // prior for random effect population means
#>   mu_raw ~ normal(0.2, 0.5);
#>   
#>   // prior for s(season)...
#>   b_raw[1 : 8] ~ multi_normal_prec(zero[1 : 8], S1[1 : 8, 1 : 8] * lambda[1]);
#>   
#>   // prior (non-centred) for s(series)...
#>   b_raw[9 : 11] ~ std_normal();
#>   
#>   // priors for AR parameters
#>   ar1 ~ normal(0, 0.25);
#>   ar2 ~ normal(0, 0.25);
#>   
#>   // priors for smoothing parameters
#>   lambda ~ normal(5, 30);
#>   
#>   // priors for overdispersion parameters
#>   phi_inv ~ student_t(3, 0, 0.1);
#>   
#>   // priors for latent trend variance parameters
#>   sigma ~ student_t(3, 0, 2.5);
#>   
#>   // trend estimates
#>   trend[1, 1 : n_series] ~ normal(0, sigma);
#>   trend[2, 1 : n_series] ~ normal(trend[1, 1 : n_series] * ar1, sigma);
#>   for (s in 1 : n_series) {
#>     trend[3 : n, s] ~ normal(ar1[s] * trend[2 : (n - 1), s]
#>                              + ar2[s] * trend[1 : (n - 2), s], sigma[s]);
#>   }
#>   {
#>     // likelihood functions
#>     vector[n_nonmissing] flat_trends;
#>     array[n_nonmissing] real flat_phis;
#>     flat_trends = to_vector(trend)[obs_ind];
#>     flat_phis = to_array_1d(rep_each(phi_inv, n)[obs_ind]);
#>     flat_ys ~ neg_binomial_2(exp(append_col(flat_xs, flat_trends)
#>                                  * append_row(b, 1.0)),
#>                              inv(flat_phis));
#>   }
#> }
#> generated quantities {
#>   vector[total_obs] eta;
#>   matrix[n, n_series] mus;
#>   vector[n_sp] rho;
#>   vector[n_series] tau;
#>   array[n, n_series] int ypred;
#>   matrix[n, n_series] phi_vec;
#>   vector[n_series] phi;
#>   phi = inv(phi_inv);
#>   for (s in 1 : n_series) {
#>     phi_vec[1 : n, s] = rep_vector(phi[s], n);
#>   }
#>   rho = log(lambda);
#>   for (s in 1 : n_series) {
#>     tau[s] = pow(sigma[s], -2.0);
#>   }
#>   
#>   // posterior predictions
#>   eta = X * b;
#>   for (s in 1 : n_series) {
#>     mus[1 : n, s] = eta[ytimes[1 : n, s]] + trend[1 : n, s];
#>     ypred[1 : n, s] = neg_binomial_2_rng(exp(mus[1 : n, s]), phi_vec[1 : n, s]);
#>   }
#> }
#> 
#> 

# Look at what is returned when an incorrect spelling is used
test_priors$prior[5] <- 'ar2_bananas ~ normal(0, 0.25);'
mod <- mvgam(y ~ s(series, bs = 're') +
            s(season, bs = 'cc') - 1,
            family = nb(),
            data = dat$data_train,
            trend_model = AR(p = 2),
            priors = test_priors,
            run_model = FALSE)
#> Warning: no match found in model_file for parameter: ar2_bananas
#> Your model may benefit from using "noncentred = TRUE"
code(mod)
#> // Stan model code generated by package mvgam
#> functions {
#>   vector rep_each(vector x, int K) {
#>     int N = rows(x);
#>     vector[N * K] y;
#>     int pos = 1;
#>     for (n in 1 : N) {
#>       for (k in 1 : K) {
#>         y[pos] = x[n];
#>         pos += 1;
#>       }
#>     }
#>     return y;
#>   }
#> }
#> data {
#>   int<lower=0> total_obs; // total number of observations
#>   int<lower=0> n; // number of timepoints per series
#>   int<lower=0> n_sp; // number of smoothing parameters
#>   int<lower=0> n_series; // number of series
#>   int<lower=0> num_basis; // total number of basis coefficients
#>   vector[num_basis] zero; // prior locations for basis coefficients
#>   matrix[total_obs, num_basis] X; // mgcv GAM design matrix
#>   array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
#>   matrix[8, 8] S1; // mgcv smooth penalty matrix S1
#>   int<lower=0> n_nonmissing; // number of nonmissing observations
#>   array[n_nonmissing] int<lower=0> flat_ys; // flattened nonmissing observations
#>   matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
#>   array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
#> }
#> parameters {
#>   // raw basis coefficients
#>   vector[num_basis] b_raw;
#>   
#>   // random effect variances
#>   vector<lower=0>[1] sigma_raw;
#>   
#>   // random effect means
#>   vector[1] mu_raw;
#>   
#>   // negative binomial overdispersion
#>   vector<lower=0>[n_series] phi_inv;
#>   
#>   // latent trend AR1 terms
#>   vector<lower=-1, upper=1>[n_series] ar1;
#>   
#>   // latent trend AR2 terms
#>   vector<lower=-1, upper=1>[n_series] ar2;
#>   
#>   // latent trend variance parameters
#>   vector<lower=0>[n_series] sigma;
#>   
#>   // latent trends
#>   matrix[n, n_series] trend;
#>   
#>   // smoothing parameters
#>   vector<lower=0>[n_sp] lambda;
#> }
#> transformed parameters {
#>   // basis coefficients
#>   vector[num_basis] b;
#>   b[1 : 8] = b_raw[1 : 8];
#>   b[9 : 11] = mu_raw[1] + b_raw[9 : 11] * sigma_raw[1];
#> }
#> model {
#>   // prior for random effect population variances
#>   sigma_raw ~ student_t(3, 0, 2.5);
#>   
#>   // prior for random effect population means
#>   mu_raw ~ normal(0.2, 0.5);
#>   
#>   // prior for s(season)...
#>   b_raw[1 : 8] ~ multi_normal_prec(zero[1 : 8], S1[1 : 8, 1 : 8] * lambda[1]);
#>   
#>   // prior (non-centred) for s(series)...
#>   b_raw[9 : 11] ~ std_normal();
#>   
#>   // priors for AR parameters
#>   ar1 ~ std_normal();
#>   ar2 ~ std_normal();
#>   
#>   // priors for smoothing parameters
#>   lambda ~ normal(5, 30);
#>   
#>   // priors for overdispersion parameters
#>   phi_inv ~ student_t(3, 0, 0.1);
#>   
#>   // priors for latent trend variance parameters
#>   sigma ~ student_t(3, 0, 2.5);
#>   
#>   // trend estimates
#>   trend[1, 1 : n_series] ~ normal(0, sigma);
#>   trend[2, 1 : n_series] ~ normal(trend[1, 1 : n_series] * ar1, sigma);
#>   for (s in 1 : n_series) {
#>     trend[3 : n, s] ~ normal(ar1[s] * trend[2 : (n - 1), s]
#>                              + ar2[s] * trend[1 : (n - 2), s], sigma[s]);
#>   }
#>   {
#>     // likelihood functions
#>     vector[n_nonmissing] flat_trends;
#>     array[n_nonmissing] real flat_phis;
#>     flat_trends = to_vector(trend)[obs_ind];
#>     flat_phis = to_array_1d(rep_each(phi_inv, n)[obs_ind]);
#>     flat_ys ~ neg_binomial_2(exp(append_col(flat_xs, flat_trends)
#>                                  * append_row(b, 1.0)),
#>                              inv(flat_phis));
#>   }
#> }
#> generated quantities {
#>   vector[total_obs] eta;
#>   matrix[n, n_series] mus;
#>   vector[n_sp] rho;
#>   vector[n_series] tau;
#>   array[n, n_series] int ypred;
#>   matrix[n, n_series] phi_vec;
#>   vector[n_series] phi;
#>   phi = inv(phi_inv);
#>   for (s in 1 : n_series) {
#>     phi_vec[1 : n, s] = rep_vector(phi[s], n);
#>   }
#>   rho = log(lambda);
#>   for (s in 1 : n_series) {
#>     tau[s] = pow(sigma[s], -2.0);
#>   }
#>   
#>   // posterior predictions
#>   eta = X * b;
#>   for (s in 1 : n_series) {
#>     mus[1 : n, s] = eta[ytimes[1 : n, s]] + trend[1 : n, s];
#>     ypred[1 : n, s] = neg_binomial_2_rng(exp(mus[1 : n, s]), phi_vec[1 : n, s]);
#>   }
#> }
#> 
#> 

# Example of changing parametric (fixed effect) priors
simdat <- sim_mvgam()

# Add a fake covariate
simdat$data_train$cov <- rnorm(NROW(simdat$data_train))

priors <- get_mvgam_priors(y ~ cov + s(season),
                          data = simdat$data_train,
                          family = poisson(),
                          trend_model = AR())

# Change priors for the intercept and fake covariate effects
priors$prior[1] <- '(Intercept) ~ normal(0, 1);'
priors$prior[2] <- 'cov ~ normal(0, 0.1);'

mod2 <- mvgam(y ~ cov + s(season),
             data = simdat$data_train,
             trend_model = AR(),
             family = poisson(),
             priors = priors,
             run_model = FALSE)
#> Your model may benefit from using "noncentred = TRUE"
code(mod2)
#> // Stan model code generated by package mvgam
#> data {
#>   int<lower=0> total_obs; // total number of observations
#>   int<lower=0> n; // number of timepoints per series
#>   int<lower=0> n_sp; // number of smoothing parameters
#>   int<lower=0> n_series; // number of series
#>   int<lower=0> num_basis; // total number of basis coefficients
#>   vector[num_basis] zero; // prior locations for basis coefficients
#>   matrix[total_obs, num_basis] X; // mgcv GAM design matrix
#>   array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
#>   matrix[9, 18] S1; // mgcv smooth penalty matrix S1
#>   int<lower=0> n_nonmissing; // number of nonmissing observations
#>   array[n_nonmissing] int<lower=0> flat_ys; // flattened nonmissing observations
#>   matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
#>   array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
#> }
#> parameters {
#>   // raw basis coefficients
#>   vector[num_basis] b_raw;
#>   
#>   // latent trend AR1 terms
#>   vector<lower=-1, upper=1>[n_series] ar1;
#>   
#>   // latent trend variance parameters
#>   vector<lower=0>[n_series] sigma;
#>   
#>   // latent trends
#>   matrix[n, n_series] trend;
#>   
#>   // smoothing parameters
#>   vector<lower=0>[n_sp] lambda;
#> }
#> transformed parameters {
#>   // basis coefficients
#>   vector[num_basis] b;
#>   b[1 : num_basis] = b_raw[1 : num_basis];
#> }
#> model {
#>   // prior for (Intercept)...
#>   b_raw[1] ~ normal(0, 1);
#>   
#>   // prior for cov...
#>   b_raw[2] ~ normal(0, 0.1);
#>   
#>   // prior for s(season)...
#>   b_raw[3 : 11] ~ multi_normal_prec(zero[3 : 11],
#>                                     S1[1 : 9, 1 : 9] * lambda[1]
#>                                     + S1[1 : 9, 10 : 18] * lambda[2]);
#>   
#>   // priors for AR parameters
#>   ar1 ~ std_normal();
#>   
#>   // priors for smoothing parameters
#>   lambda ~ normal(5, 30);
#>   
#>   // priors for latent trend variance parameters
#>   sigma ~ student_t(3, 0, 2.5);
#>   
#>   // trend estimates
#>   trend[1, 1 : n_series] ~ normal(0, sigma);
#>   for (s in 1 : n_series) {
#>     trend[2 : n, s] ~ normal(ar1[s] * trend[1 : (n - 1), s], sigma[s]);
#>   }
#>   {
#>     // likelihood functions
#>     vector[n_nonmissing] flat_trends;
#>     flat_trends = to_vector(trend)[obs_ind];
#>     flat_ys ~ poisson_log_glm(append_col(flat_xs, flat_trends), 0.0,
#>                               append_row(b, 1.0));
#>   }
#> }
#> generated quantities {
#>   vector[total_obs] eta;
#>   matrix[n, n_series] mus;
#>   vector[n_sp] rho;
#>   vector[n_series] tau;
#>   array[n, n_series] int ypred;
#>   rho = log(lambda);
#>   for (s in 1 : n_series) {
#>     tau[s] = pow(sigma[s], -2.0);
#>   }
#>   
#>   // posterior predictions
#>   eta = X * b;
#>   for (s in 1 : n_series) {
#>     mus[1 : n, s] = eta[ytimes[1 : n, s]] + trend[1 : n, s];
#>     ypred[1 : n, s] = poisson_log_rng(mus[1 : n, s]);
#>   }
#> }
#> 
#> 

# Likewise using 'brms' utilities (note that you can use
# Intercept rather than `(Intercept)` to change priors on the intercept)
brmsprior <- c(prior(normal(0.2, 0.5), class = cov),
              prior(normal(0, 0.25), class = Intercept))
brmsprior
#>             prior     class coef group resp dpar nlpar   lb   ub source
#>  normal(0.2, 0.5)       cov                            <NA> <NA>   user
#>   normal(0, 0.25) Intercept                            <NA> <NA>   user

mod2 <- mvgam(y ~ cov + s(season),
             data = simdat$data_train,
             trend_model = AR(),
             family = poisson(),
             priors = brmsprior,
             run_model = FALSE)
#> Your model may benefit from using "noncentred = TRUE"
code(mod2)
#> // Stan model code generated by package mvgam
#> data {
#>   int<lower=0> total_obs; // total number of observations
#>   int<lower=0> n; // number of timepoints per series
#>   int<lower=0> n_sp; // number of smoothing parameters
#>   int<lower=0> n_series; // number of series
#>   int<lower=0> num_basis; // total number of basis coefficients
#>   vector[num_basis] zero; // prior locations for basis coefficients
#>   matrix[total_obs, num_basis] X; // mgcv GAM design matrix
#>   array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
#>   matrix[9, 18] S1; // mgcv smooth penalty matrix S1
#>   int<lower=0> n_nonmissing; // number of nonmissing observations
#>   array[n_nonmissing] int<lower=0> flat_ys; // flattened nonmissing observations
#>   matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
#>   array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
#> }
#> parameters {
#>   // raw basis coefficients
#>   vector[num_basis] b_raw;
#>   
#>   // latent trend AR1 terms
#>   vector<lower=-1, upper=1>[n_series] ar1;
#>   
#>   // latent trend variance parameters
#>   vector<lower=0>[n_series] sigma;
#>   
#>   // latent trends
#>   matrix[n, n_series] trend;
#>   
#>   // smoothing parameters
#>   vector<lower=0>[n_sp] lambda;
#> }
#> transformed parameters {
#>   // basis coefficients
#>   vector[num_basis] b;
#>   b[1 : num_basis] = b_raw[1 : num_basis];
#> }
#> model {
#>   // prior for (Intercept)...
#>   b_raw[1] ~ normal(0, 0.25);
#>   
#>   // prior for cov...
#>   b_raw[2] ~ normal(0.2, 0.5);
#>   
#>   // prior for s(season)...
#>   b_raw[3 : 11] ~ multi_normal_prec(zero[3 : 11],
#>                                     S1[1 : 9, 1 : 9] * lambda[1]
#>                                     + S1[1 : 9, 10 : 18] * lambda[2]);
#>   
#>   // priors for AR parameters
#>   ar1 ~ std_normal();
#>   
#>   // priors for smoothing parameters
#>   lambda ~ normal(5, 30);
#>   
#>   // priors for latent trend variance parameters
#>   sigma ~ student_t(3, 0, 2.5);
#>   
#>   // trend estimates
#>   trend[1, 1 : n_series] ~ normal(0, sigma);
#>   for (s in 1 : n_series) {
#>     trend[2 : n, s] ~ normal(ar1[s] * trend[1 : (n - 1), s], sigma[s]);
#>   }
#>   {
#>     // likelihood functions
#>     vector[n_nonmissing] flat_trends;
#>     flat_trends = to_vector(trend)[obs_ind];
#>     flat_ys ~ poisson_log_glm(append_col(flat_xs, flat_trends), 0.0,
#>                               append_row(b, 1.0));
#>   }
#> }
#> generated quantities {
#>   vector[total_obs] eta;
#>   matrix[n, n_series] mus;
#>   vector[n_sp] rho;
#>   vector[n_series] tau;
#>   array[n, n_series] int ypred;
#>   rho = log(lambda);
#>   for (s in 1 : n_series) {
#>     tau[s] = pow(sigma[s], -2.0);
#>   }
#>   
#>   // posterior predictions
#>   eta = X * b;
#>   for (s in 1 : n_series) {
#>     mus[1 : n, s] = eta[ytimes[1 : n, s]] + trend[1 : n, s];
#>     ypred[1 : n, s] = poisson_log_rng(mus[1 : n, s]);
#>   }
#> }
#> 
#> 

# The "class = 'b'" shortcut can be used to put the same prior on all
# 'fixed' effect coefficients (apart from any intercepts)
set.seed(0)
dat <- mgcv::gamSim(1, n = 200, scale = 2)
#> Gu & Wahba 4 term additive model
dat$time <- 1:NROW(dat)
mod <- mvgam(y ~ x0 + x1 + s(x2) + s(x3),
            priors = prior(normal(0, 0.75), class = 'b'),
            data = dat,
            family = gaussian(),
            run_model = FALSE)
code(mod)
#> // Stan model code generated by package mvgam
#> functions {
#>   vector rep_each(vector x, int K) {
#>     int N = rows(x);
#>     vector[N * K] y;
#>     int pos = 1;
#>     for (n in 1 : N) {
#>       for (k in 1 : K) {
#>         y[pos] = x[n];
#>         pos += 1;
#>       }
#>     }
#>     return y;
#>   }
#> }
#> data {
#>   int<lower=0> total_obs; // total number of observations
#>   int<lower=0> n; // number of timepoints per series
#>   int<lower=0> n_sp; // number of smoothing parameters
#>   int<lower=0> n_series; // number of series
#>   int<lower=0> num_basis; // total number of basis coefficients
#>   vector[num_basis] zero; // prior locations for basis coefficients
#>   matrix[total_obs, num_basis] X; // mgcv GAM design matrix
#>   array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
#>   matrix[9, 18] S1; // mgcv smooth penalty matrix S1
#>   matrix[9, 18] S2; // mgcv smooth penalty matrix S2
#>   int<lower=0> n_nonmissing; // number of nonmissing observations
#>   vector[n_nonmissing] flat_ys; // flattened nonmissing observations
#>   matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
#>   array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
#> }
#> parameters {
#>   // raw basis coefficients
#>   vector[num_basis] b_raw;
#>   
#>   // gaussian observation error
#>   vector<lower=0>[n_series] sigma_obs;
#>   
#>   // smoothing parameters
#>   vector<lower=0>[n_sp] lambda;
#> }
#> transformed parameters {
#>   // basis coefficients
#>   vector[num_basis] b;
#>   b[1 : num_basis] = b_raw[1 : num_basis];
#> }
#> model {
#>   // prior for (Intercept)...
#>   b_raw[1] ~ student_t(3, 7.4, 3.7);
#>   
#>   // prior for x0...
#>   b_raw[2] ~ normal(0, 0.75);
#>   
#>   // prior for x1...
#>   b_raw[3] ~ normal(0, 0.75);
#>   
#>   // prior for s(x2)...
#>   b_raw[4 : 12] ~ multi_normal_prec(zero[4 : 12],
#>                                     S1[1 : 9, 1 : 9] * lambda[1]
#>                                     + S1[1 : 9, 10 : 18] * lambda[2]);
#>   
#>   // prior for s(x3)...
#>   b_raw[13 : 21] ~ multi_normal_prec(zero[13 : 21],
#>                                      S2[1 : 9, 1 : 9] * lambda[3]
#>                                      + S2[1 : 9, 10 : 18] * lambda[4]);
#>   
#>   // priors for smoothing parameters
#>   lambda ~ normal(5, 30);
#>   
#>   // priors for observation error parameters
#>   sigma_obs ~ student_t(3, 0, 3.7);
#>   {
#>     // likelihood functions
#>     vector[n_nonmissing] flat_sigma_obs;
#>     flat_sigma_obs = rep_each(sigma_obs, n)[obs_ind];
#>     flat_ys ~ normal_id_glm(flat_xs, 0.0, b, flat_sigma_obs);
#>   }
#> }
#> generated quantities {
#>   vector[total_obs] eta;
#>   matrix[n, n_series] sigma_obs_vec;
#>   matrix[n, n_series] mus;
#>   vector[n_sp] rho;
#>   array[n, n_series] real ypred;
#>   rho = log(lambda);
#>   
#>   // posterior predictions
#>   eta = X * b;
#>   for (s in 1 : n_series) {
#>     sigma_obs_vec[1 : n, s] = rep_vector(sigma_obs[s], n);
#>   }
#>   for (s in 1 : n_series) {
#>     mus[1 : n, s] = eta[ytimes[1 : n, s]];
#>     ypred[1 : n, s] = normal_rng(mus[1 : n, s], sigma_obs_vec[1 : n, s]);
#>   }
#> }
#> 
#> 
# }