class: center, middle, inverse, title-slide

# Understanding the Coefficient of Determination
### STAT 021 with Prof Suzy
### Swarthmore College

---

<style type="text/css">
pre {
  background: #FFBB33;
  max-width: 100%;
  overflow-x: scroll;
}
.scroll-output {
  height: 70%;
  overflow-y: scroll;
}
.scroll-small {
  height: 50%;
  overflow-y: scroll;
}
.red{color: #ce151e;}
.green{color: #26b421;}
.blue{color: #426EF0;}
</style>

## What does the rest of the R output mean?

```
## 
## Call:
## lm(formula = prop_uninsured ~ spending_capita, data = hc_employer_2013)
## 
## Residuals:
##       Min        1Q    Median        3Q       Max 
## -0.086642 -0.016766  0.002575  0.013199  0.073975 
## 
## Coefficients:
##                   Estimate Std. Error t value Pr(>|t|)    
## (Intercept)      2.642e-01  2.744e-02   9.627  7.0e-13 ***
## spending_capita -1.759e-06  3.364e-07  -5.230  3.5e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.02946 on 49 degrees of freedom
## Multiple R-squared:  0.3582, Adjusted R-squared:  0.3451
## F-statistic: 27.35 on 1 and 49 DF,  p-value: 3.503e-06
```

---

## What does the rest of the R output mean?
### The <a href="https://en.wikipedia.org/wiki/Coefficient_of_determination">coefficient of determination</a>

**Interpretation:** R-squared is a *statistic* that represents a proportion: specifically, the proportion of the variability (dispersion) in our observations of `\(Y\)`, i.e. `\((y_1,\dots,y_n)\)`, that our linear model can account for.

**Note:** We will use the .blue[adjusted R-squared] value whenever we have more than one predictor variable (multiple linear regression, MLR), but for SLR you can use either the adjusted or the multiple R-squared value.

---

## Sums of squares

As with ANOVA models, we can decompose the variability in our observed response variable into two parts: one due to the linear model that depends on the predictor `\((\hat{\beta}_0 + \hat{\beta}_1x)\)` and one that is unexplained and due to the random measurement error `\((\epsilon)\)`.
The sum of these two components is the total observed variability in the response variable.

**Regression sum of squares:** (SSreg)
`$$\sum_{i=1}^{n}\left( \hat{y}_i - \bar{y} \right)^2$$`

**Residual sum of squares/sum of squared errors:** (SSres or SSE)
`$$\sum_{i=1}^{n}\left( y_i - \hat{y}_i \right)^2 = \sum_{i=1}^n e_i^2$$`

**Total sum of squares:** (SStot)
`$$\sum_{i=1}^{n}\left( y_i - \bar{y} \right)^2$$`

---

## Health care example
### Sums of squares - measurements of dispersion

```r
anova(SLR_hc)
```

```
## Analysis of Variance Table
## 
## Response: prop_uninsured
##                 Df   Sum Sq   Mean Sq F value    Pr(>F)    
## spending_capita  1 0.023730 0.0237300   27.35 3.503e-06 ***
## Residuals       49 0.042514 0.0008676                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```

---

## What does the rest of the R output mean?
### The <a href="https://en.wikipedia.org/wiki/Coefficient_of_determination">coefficient of determination</a>

In SLR we have a single predictor variable, and it is quantitative. In this special scenario, the coefficient of determination gives us the same information as the correlation between the predictor and the response: `\(R^2\)` is the square of that correlation. In more complicated models, with more than one predictor variable and/or categorical predictors, pairwise correlations are no longer as informative as the coefficient of determination. In these instances, R-squared still tells us about the strength of the linear relationship between the predictor(s) and the response.
To understand why this is the case, let's look at the definition of R-squared:

`$$R^2 = 1 - \frac{SSres}{SStot}$$`

--

Using the fact that `\(SStot = SSreg + SSres\)`, we see that we could also write

`$$R^2 = \frac{SSreg}{SStot}.$$`

---

## Health care example
### Coefficient of determination

.scroll-output[

```r
summary(SLR_hc)
```

```
## 
## Call:
## lm(formula = prop_uninsured ~ spending_capita, data = hc_employer_2013)
## 
## Residuals:
##       Min        1Q    Median        3Q       Max 
## -0.086642 -0.016766  0.002575  0.013199  0.073975 
## 
## Coefficients:
##                   Estimate Std. Error t value Pr(>|t|)    
## (Intercept)      2.642e-01  2.744e-02   9.627  7.0e-13 ***
## spending_capita -1.759e-06  3.364e-07  -5.230  3.5e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.02946 on 49 degrees of freedom
## Multiple R-squared:  0.3582, Adjusted R-squared:  0.3451
## F-statistic: 27.35 on 1 and 49 DF,  p-value: 3.503e-06
```
]
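
---

## Checking the identities by hand

The sums-of-squares decomposition and the two formulas for `\(R^2\)` are easy to verify numerically. A minimal sketch using R's built-in `mtcars` data as a stand-in (the `hc_employer_2013` data used elsewhere in these slides isn't bundled here):

```r
# Fit a simple linear regression: mpg on wt
fit <- lm(mpg ~ wt, data = mtcars)

y_bar  <- mean(mtcars$mpg)
ss_reg <- sum((fitted(fit) - y_bar)^2)    # regression sum of squares (SSreg)
ss_res <- sum(residuals(fit)^2)           # residual sum of squares (SSres)
ss_tot <- sum((mtcars$mpg - y_bar)^2)     # total sum of squares (SStot)

# SStot = SSreg + SSres
all.equal(ss_tot, ss_reg + ss_res)

# R^2 two ways; both match summary(fit)$r.squared
1 - ss_res / ss_tot
ss_reg / ss_tot

# In SLR only, R^2 also equals the squared correlation between x and y
cor(mtcars$mpg, mtcars$wt)^2
```

All three computations of `\(R^2\)` agree with the `Multiple R-squared` value printed by `summary(fit)`.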