In addition to NUTS sampling, you can fit the model using the BFGS algorithm. This algorithm searches for the mode of the posterior; it runs much faster than NUTS, but won't return any confidence intervals. In production, BFGS is used to produce estimates for counties, and as a fallback when NUTS fails to produce a timely and/or converged fit for state data. Runtimes scale linearly with tries/cores, but are generally in the 30s-10min range. The same underlying model is used in both cases.

Usage

# S3 method for covidestim
runOptimizer(
  cc,
  cores = parallel::detectCores(),
  tries = 10,
  iter = 2000,
  timeout = 60,
  ...
)



Arguments

cc
A valid covidestim configuration.


cores
A number. How many cores to use to execute runs. Multicore execution is less important for runOptimizer() than it is for run().


tries
The number of times to run the BFGS algorithm.


iter
The maximum number of iterations. Passed to rstan::optimizing().


timeout
How long to let each run go before killing it, in seconds.


...
Arguments forwarded to rstan::optimizing().


Value

An S3 object of class covidestim_result.


Details

The BFGS algorithm is run tries times, using tries different seeds derived from cc$seed. Once all runs complete, the run with the highest value of the log-posterior is selected and returned. BFGS occasionally gets stuck; such runs are flagged and discarded before the ranking begins.
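The best-of-N scheme described above can be sketched in plain R. This is an illustration only: a toy quadratic log-posterior stands in for the covidestim model, stats::optim() stands in for rstan::optimizing(), and the seed-derivation rule is a simplified stand-in for the package's actual cc$seed-based scheme.

```r
# Illustration of the best-of-N scheme: a toy quadratic log-posterior
# stands in for the real covidestim model, and stats::optim() stands in
# for rstan::optimizing().
library(parallel)

toy_log_posterior <- function(theta) -sum((theta - c(1, 2))^2)  # mode at (1, 2)

run_one_try <- function(try_seed) {
  set.seed(try_seed)                 # every try gets its own derived seed
  init <- rnorm(2)                   # random initialization
  fit  <- optim(init, toy_log_posterior, method = "BFGS",
                control = list(fnscale = -1))      # fnscale = -1 => maximize
  list(par       = fit$par,
       value     = toy_log_posterior(fit$par),     # log-posterior at optimum
       converged = fit$convergence == 0)
}

base_seed <- 42
tries     <- 10
seeds     <- base_seed + seq_len(tries)  # simplified seed derivation

# Run the tries in parallel (use mc.cores = 1 on Windows)
runs <- mclapply(seeds, run_one_try, mc.cores = 2)

# Discard runs that got stuck, then keep the highest log-posterior
ok   <- Filter(function(r) r$converged, runs)
best <- ok[[which.max(vapply(ok, function(r) r$value, numeric(1)))]]
best$par  # point estimate from the winning run
```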

A second method for fitting the model, using the NUTS algorithm, is available as run(). It provides CIs, but is significantly slower.

run(): NUTS. Returns CIs. Runtime: 30m-hours. Always returns a result, potentially with warnings, of which "treedepth" and "divergent transitions" are the most serious.

runOptimizer(): BFGS. No CIs. Runtime: ~1-3min. May exit with nonzero status (lack of convergence), or time out (rare, gracefully handled internally).
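For reference, the two entry points are invoked the same way on a configuration object (sketch only; cfg is a covidestim configuration as constructed in the Examples section, and the cores value is arbitrary):

```r
# Both fitting functions take the same configuration object.
fit_nuts <- run(cfg, cores = 4)          # NUTS: slow, samples the posterior, returns CIs
fit_bfgs <- runOptimizer(cfg, cores = 4) # BFGS: fast, point estimate of the posterior mode
```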

See also

run() for fitting the same model with the NUTS sampler.

Examples

# Note that this configuration is improper, as it uses New York City
# case/death data but Manhattan's FIPS code ('36061') and population size.
# (for demonstration purposes only!)
cfg <- covidestim(ndays = 120, seed = 42, region = '36061', pop_size = 1.63e6) +
  input_cases(example_nyc_data('cases')) +
  input_deaths(example_nyc_data('deaths'))

if (FALSE) {
result <- runOptimizer(cfg, cores = 2)
}