The general unimodal distribution (GUD) family is a family of two-component mixture distributions. The probability density function (pdf) of a member of the GUD family is f(y ∣ w, θ, ξ1, ξ2) = w f1(y ∣ θ, ξ1) + (1 − w) f2(y ∣ θ, ξ2), where w ∈ [0, 1] is the weight parameter, θ ∈ (−∞, +∞) is the mode as a location parameter, ξ1 collects the parameters of f1(⋅ ∣ θ, ξ1) other than the location parameter, and ξ2 is defined similarly for f2(⋅ ∣ θ, ξ2). Besides unimodality, all members of the GUD family share three additional features; more details of the GUD family, including these features, can be found in Liu, Q., Huang, X., & Bai, R. (2024).
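To make the mixture form concrete, here is a minimal sketch of a GUD-style density in R. The two components (a normal and a Cauchy sharing the mode θ) are our own illustrative stand-ins, not the TPSC components fitted later in this vignette.

```r
# Sketch of the GUD mixture form
#   f(y) = w * f1(y | theta, xi1) + (1 - w) * f2(y | theta, xi2).
# Illustrative components: a normal (xi1 = sd) and a Cauchy (xi2 = scale),
# both unimodal with mode theta, so the mixture is unimodal at theta.
gud_pdf <- function(y, w, theta, sd1, scale2) {
  w * dnorm(y, mean = theta, sd = sd1) +
    (1 - w) * dcauchy(y, location = theta, scale = scale2)
}

# The mixture is a valid density (it integrates to 1) ...
integrate(function(y) gud_pdf(y, w = 0.4, theta = 2, sd1 = 1, scale2 = 2),
          lower = -Inf, upper = Inf)
# ... and its peak sits at the shared mode theta = 2
grid <- seq(-10, 10, by = 0.01)
grid[which.max(gud_pdf(grid, w = 0.4, theta = 2, sd1 = 1, scale2 = 2))]
```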
In this section, we demonstrate how to use the GUD package to analyze the 2003 United States crime data, as in Section 2 of Liu, Q., Huang, X., & Bai, R. (2024).
In “Statistics: The Art and Science of Learning from Data” (5th edition) by Alan Agresti, Christine A. Franklin, and Bernhard Klingenberg, an interesting example based on the 2003 United States crime data demonstrates the influence of outliers on the classic linear regression model. This compelling example partly motivates the construction of Bayesian modal regression based on the GUD family. The dataset contains the murder rate, percentage of college education, poverty percentage, and metropolitan rate for the 50 states of the United States and the District of Columbia (D.C.) in 2003. The murder rate is defined as the annual number of murders per 100,000 people in the population, the poverty percentage is the percentage of residents with income below the poverty level, and the metropolitan rate is the percentage of the population living in metropolitan areas. As exploratory data analysis, we present the conditional scatter plot matrix below.
# load the data set crime from the GUD package
df1 <- crime
# conditional scatter plot matrix of the U.S. crime data
if (require(lattice)) {
  lattice::splom(~ df1[c(6, 4, 9, 3)],
                 main = NULL,
                 panel = function(x, y, ...) {
                   panel.splom(x, y, ...)
                 })
}
#> Loading required package: lattice
In the conditional scatter plot matrix, we notice an outlier, Washington, D.C., which stands out and does not follow the common pattern of the 50 states.
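The pull that a single extreme point like D.C. exerts on a least-squares fit can be seen in a small self-contained sketch (simulated data, not the crime data):

```r
# Simulated illustration of outlier influence on least-squares regression:
# one extreme point at the edge of the design inflates the fitted slope
# well past the truth (0.5); dropping it restores the fit.
set.seed(1)
x <- 1:50
y_out <- 2 + 0.5 * x + rnorm(50)
y_out[50] <- 200                       # a single D.C.-style outlier

x_clean <- x[-50]
y_clean <- y_out[-50]

coef(lm(y_out ~ x))["x"]               # slope inflated by the outlier
coef(lm(y_clean ~ x_clean))["x_clean"] # close to the true slope 0.5
```

Bayesian modal regression, by targeting the conditional mode, is far less sensitive to such observations.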
Next, we demonstrate how to fit the Bayesian modal regression model based on the TPSC distribution to the 2003 United States crime data.
TPSC_model <- modal_regression(`murder rate` ~ college + poverty + metropolitan,
                               data = df1,
                               model = "TPSC",
                               chains = 2,
                               iter = 2000)
#>
#> SAMPLING FOR MODEL 'TPSC' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 4.6e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.46 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 1:
#> Chain 1: Elapsed Time: 3.056 seconds (Warm-up)
#> Chain 1: 1.772 seconds (Sampling)
#> Chain 1: 4.828 seconds (Total)
#> Chain 1:
#>
#> SAMPLING FOR MODEL 'TPSC' NOW (CHAIN 2).
#> Chain 2:
#> Chain 2: Gradient evaluation took 3.4e-05 seconds
#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.34 seconds.
#> Chain 2: Adjust your expectations accordingly!
#> Chain 2:
#> Chain 2:
#> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 2:
#> Chain 2: Elapsed Time: 2.735 seconds (Warm-up)
#> Chain 2: 1.814 seconds (Sampling)
#> Chain 2: 4.549 seconds (Total)
#> Chain 2:
One can summarize the Bayesian analysis using the summary function.
print(summary(TPSC_model), n = 7)
#> # A tibble: 113 × 10
#> variable mean median sd mad q5 q95 rhat ess_bulk
#> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 w 0.283 0.285 0.112 0.119 0.0964 0.469 1.01 590.
#> 2 delta 1.88 1.76 0.622 0.544 1.08 3.06 1.00 1028.
#> 3 sigma 1.19 1.17 0.244 0.239 0.824 1.63 1.00 844.
#> 4 (Intercept) 1.23 1.29 2.67 2.59 -3.23 5.29 1.00 736.
#> 5 college -0.200 -0.205 0.0806 0.0797 -0.327 -0.0591 1.00 772.
#> 6 poverty 0.246 0.254 0.137 0.134 0.00875 0.457 1.00 859.
#> 7 metropolitan 0.0628 0.0615 0.0137 0.0132 0.0426 0.0873 1.01 856.
#> # ℹ 106 more rows
#> # ℹ 1 more variable: ess_tail <dbl>
One can present the traceplot of the MCMC chains using the bayesplot::mcmc_trace function.
if (require(bayesplot)) {
  bayesplot::mcmc_trace(TPSC_model, pars = c("(Intercept)",
                                             "college",
                                             "poverty",
                                             "metropolitan"))
}
#> Loading required package: bayesplot
#> This is bayesplot version 1.11.1
#> - Online documentation and vignettes at mc-stan.org/bayesplot
#> - bayesplot theme set to bayesplot::theme_default()
#> * Does _not_ affect other ggplot2 plots
#> * See ?bayesplot_theme_set for details on theme setting
The summary of the posterior predictive distribution can be obtained using the following command. Here ystar[1] represents the posterior prediction for the first observation in the dataset, and so on.
summary(posterior::subset_draws(TPSC_model, variable = "ystar"))
#> # A tibble: 51 × 10
#> variable mean median sd mad q5 q95 rhat ess_bulk ess_tail
#> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 ystar[1] 8.31 6.25 59.5 2.01 2.92 13.7 1.00 2019. 1926.
#> 2 ystar[2] 2.32 1.37 7.90 1.95 -1.60 8.25 1.00 1969. 2045.
#> 3 ystar[3] 19.5 6.15 533. 1.90 3.19 13.4 0.999 1975. 1852.
#> 4 ystar[4] 6.08 5.55 27.3 2.10 2.24 12.8 1.00 1542. 1807.
#> 5 ystar[5] 5.63 6.16 64.0 1.96 3.17 13.4 1.00 1734. 1771.
#> 6 ystar[6] 3.66 2.66 7.34 2.06 -0.355 9.64 1.00 1616. 1891.
#> 7 ystar[7] 5.05 3.65 16.6 1.91 0.764 11.7 1.00 1782. 1814.
#> 8 ystar[8] 5.75 4.90 5.01 1.87 2.17 11.9 1.00 1838. 2050.
#> 9 ystar[9] 6.20 5.18 6.69 2.57 1.26 13.5 1.00 1315. 1663.
#> 10 ystar[10] 7.63 6.47 8.05 1.94 3.64 13.9 1.00 1912. 2006.
#> # ℹ 41 more rows
Further comparisons between mean, median, and modal regression can be found in Section 2 of Liu, Q., Huang, X., & Bai, R. (2024) and Section 6 of Liu, Q., Huang, X., & Zhou, H. (2024).
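As a quick intuition for why these three regression targets differ, the following self-contained sketch (ours, not from the papers above) compares the mean, the median, and a kernel-density estimate of the mode for a right-skewed sample:

```r
# For a right-skewed outcome the mode, median, and mean separate:
# lognormal(0, 1) has mode exp(-1), median 1, and mean exp(0.5).
set.seed(42)
y <- rlnorm(1e5, meanlog = 0, sdlog = 1)
d <- density(y)                  # kernel density estimate
mode_hat <- d$x[which.max(d$y)]  # crude mode estimate: peak of the KDE
c(mode = mode_hat, median = median(y), mean = mean(y))
```

Mean, median, and modal regression model these three different conditional summaries of the response, and for skewed data they can lead to noticeably different fitted lines.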