# Textbook error 101 -- A low p-value for the full model does not mean that the model is a good predictor of the response

On page 606 of Lock et al., "Statistics: Unlocking the Power of Data", the authors state in item D: "The p-value from the ANOVA table is 0.000 so the model as a whole is effective at predicting grade point average." Ah no.

```r
library(mvtnorm)  # for rmvnorm

# two correlated predictors with near-zero effects and a large n
rho <- 0.5
n <- 10^5
Sigma <- diag(2)
Sigma[1, 2] <- Sigma[2, 1] <- rho
X <- rmvnorm(n, mean = c(0, 0), sigma = Sigma)
colnames(X) <- c("X1", "X2")
beta <- c(0.01, -0.02)       # tiny true effects
y <- X %*% beta + rnorm(n)   # noise sd = 1, so the signal:noise ratio is tiny
fit <- lm(y ~ X)
summary(fit)
```

```
## 
## Call:
## lm(formula = y ~ X)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.0371 -0.6716 -0.0016  0.6723  4.2041 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -0.004243   0.003155  -1.345    0.179    
## XX1          0.015845   0.003648   4.344 1.40e-05 ***
## XX2         -0.026376   0.003641  -7.245 4.36e-13 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.9975 on 99997 degrees of freedom
## Multiple R-squared:  0.0005319,  Adjusted R-squared:  0.0005119 
## F-statistic: 26.61 on 2 and 99997 DF,  p-value: 2.805e-12
```

A p-value is not a measure of the predictive capacity of a model. The p-value is a function of 1) the signal, 2) the noise (unmodeled error), and 3) the sample size, while predictive capacity is a function of the signal:noise ratio alone. If the signal:noise ratio is tiny, the predictive capacity is small, yet the p-value can still be tiny provided the sample size is large enough. The simulation above makes the point: the model explains essentially none of the variance (R-squared ≈ 0.0005), but with n = 10^5 the full-model p-value is 2.8 × 10^-12.
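This dependence on sample size can be seen directly from the identity F = (R²/k) / ((1 − R²)/(n − k − 1)) for a model with k predictors. A minimal sketch, assuming we hold R² fixed at roughly the value from the fit above and vary only n:

```r
# Sketch: fix the signal:noise ratio (R^2 = 0.0005, about what the fit above
# recovers) and vary only the sample size n. The F statistic, and hence the
# p-value, is a function of n even though predictive capacity never changes.
R2 <- 0.0005
k <- 2  # number of predictors
for (n in c(10^3, 10^4, 10^5, 10^6)) {
  F_stat <- (R2 / (1 - R2)) * ((n - k - 1) / k)
  p <- pf(F_stat, df1 = k, df2 = n - k - 1, lower.tail = FALSE)
  cat(sprintf("n = %7d   F = %8.2f   p = %.3g\n", n, F_stat, p))
}
```

At n = 10^3 the same R² gives an unremarkable p-value; by n = 10^6 the p-value is vanishingly small. Nothing about the model's usefulness for prediction has changed between those two rows.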