# Justification for and optimality of $R^2_{adj.}$ as a model selection criterion

Cross Validated Asked on January 7, 2022

In a recent thread, use of adjusted $$R^2$$ ($$R^2_{adj.}$$) is mentioned in the context of model selection, e.g.

The adjustment was invented as a solution to problems caused by variable selection

Question: Is there any justification for using $$R^2_{adj.}$$ for model selection? That is, does $$R^2_{adj.}$$ have any optimality properties in the context of model selection?

For example, AIC is an efficient criterion and BIC is a consistent one, but $$R^2_{adj.}$$ coincides with neither, which makes me wonder whether it can be optimal in any other sense.

I would propose six optimality properties.

1. Overfit Mitigation
2. Simplicity and Parsimony
3. General Shared Understanding
4. Semi-Efficient Factor Identification
5. Robustness to Sample Size Change
6. Explanatory Utility

Overfit Mitigation

What kind of model is overfit? In part, this depends on the model's use case. Suppose we are using a model to test whether a hypothesized factor-level relationship exists. In that case a model which tends to allow spurious relations is overfit.

"The use of an adjusted R2...is an attempt to account for the phenomenon of the R2 automatically and spuriously increasing when extra explanatory variables are added to the model." (Wikipedia)
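The phenomenon is easy to reproduce. A minimal sketch (made-up data, plain numpy, OLS via least squares): fit one real predictor, then keep appending pure-noise columns and watch R2 climb while adjusted R2 does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)  # one genuine predictor

def r2_stats(X, y):
    """Return (R^2, adjusted R^2) for OLS of y on X (X includes the intercept)."""
    n, k = X.shape  # k counts the intercept column, so n - k = n - p - 1
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = ((y - X @ beta) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    adj = 1 - (1 - r2) * (n - 1) / (n - k)
    return r2, adj

r2s, adjs = [], []
X = np.column_stack([np.ones(n), x])
for extra in range(6):
    r2, adj = r2_stats(X, y)
    r2s.append(r2)
    adjs.append(adj)
    print(f"{extra} noise vars: R^2 = {r2:.4f}, adjusted R^2 = {adj:.4f}")
    X = np.column_stack([X, rng.normal(size=n)])  # append a pure-noise column
```

Since the models are nested, R2 is non-decreasing at every step, while adjusted R2 is dragged down by the growing penalty factor (n - 1)/(n - k).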

Simplicity and Parsimony

Parsimony is valued on both normative and economic grounds. Occam's Razor is an example of a norm; depending on what we mean by "justification," an appeal to it might pass or fail.

The economic rationales for simplicity and parsimony are harder to dismiss:

1. Complex models with many factors are expensive to gather data for.
2. Complex models can be more expensive to execute.
3. Complex models are harder to communicate and reason about. This creates business and legal risk, as well as plain time spent communicating from one person to another.

Given two models with equal explanatory power (R2), then, AR2 selects the simpler, more parsimonious model.
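A back-of-the-envelope check of that claim (the numbers here are invented for illustration): with identical R2 and sample size, the adjusted R2 formula always favors the model with fewer predictors.

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1),
    where k is the number of predictors, excluding the intercept."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

n, r2 = 100, 0.60          # same fit for both models
simple = adjusted_r2(r2, n, k=3)    # 3-predictor model
complex_ = adjusted_r2(r2, n, k=10) # 10-predictor model
print(simple, complex_)    # the simpler model scores higher
```

The penalty factor (n - 1)/(n - k - 1) grows with k, so at equal R2 the smaller model wins every time.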

General Shared Understanding

Justification involves shared understanding. Consider a peer-review situation. If the reviewer and the reviewed lack a shared understanding of model selection, questions or rejections may occur.

R2 is an elementary statistical concept. Even those familiar only with elementary statistics generally understand that R2 is gameable and that AR2 is preferred to R2 for the reasons above.

Sure, there may be better choices than AR2, such as AIC and BIC, but if the reviewer is unfamiliar with these, their use may not succeed as a justification. Worse, the reviewer may hold a misunderstanding themselves and demand AIC or BIC when they aren't required; that demand is itself unjustified.

My limited understanding is that AIC is now considered rather arbitrary by many, specifically the 2s in the formula. WAIC, DIC, and LOO-CV have been suggested as preferable, see here.

I hope by "justified" we don't mean "no better criterion exists," because some better criterion might always exist unbeknownst to us, so that style of justification always fails. Instead, "justified" ought to mean "satisfies the requirement at hand," in my view.

Semi-Efficient Factor Identification

Caveat: I made up this term and I could be using it wrong :)

Basically, if we are interested in identifying true factor relations, we should only keep factors that are more likely than not to matter, i.e. those with p < 0.5. AR2 maximization roughly satisfies this: adding a factor with p >= 0.5 will reduce AR2. It isn't an exact match, though; adding a regressor raises AR2 precisely when its t-statistic exceeds 1 in absolute value, which corresponds to a two-sided p-value of roughly 0.32 at moderate degrees of freedom, so AR2 in practice penalizes p above roughly 0.32-0.35.

It's true that AIC penalizes more heavily in general, but I'm not sure that is a good thing if the goal is to identify every observed feature with an identifiable relation, say at least a directional one, in a given data set.

Robustness to Sample Size Change

In the comments of this post, Scortchi - Reinstate Monica notes that it "makes no sense to compare likelihoods (or therefore AICs) of models fitted on different nos. of observations." In contrast, R2 and AR2 are absolute measures that remain comparable when the number of samples changes.

This might be useful in the case of a questionnaire that includes some optional questions and partial responses. It's of course important to be mindful of issues like response bias in such cases.

Explanatory Utility

Here, we are told that "R2 and AIC are answering two different questions...R2 is saying something to the effect of how well your model explains the observed data...AIC, on the other hand, is trying to explain how well the model will predict on new data."

So if the use case is non-predictive, such as in the case of theory-driven, factor-level hypothesis testing, AIC may be considered inappropriate.

Answered by John Vandivier on January 7, 2022

I don't know if $$R^2_{adj.}$$ has any optimality properties for model selection, but it is surely taught (or at least mentioned) in that context. One reason might be that most students have met $$R^2$$ early on, so there is then something to build on.

One example is the following exam paper from the University of Oslo (see Problem 1). The text used in that course, Regression Methods in Biostatistics: Linear, Logistic, Survival, and Repeated Measures Models, second edition, by Eric Vittinghoff, David V. Glidden, Stephen C. Shiboski, and Charles E. McCulloch, mentions $$R^2_{adj.}$$ early in its chapter 10 on variable selection (as penalizing less than AIC, for example), but neither it nor AIC is mentioned in the summary/recommendations of Section 10.5.

So it is maybe mostly used didactically, as an introduction to the problems of model selection, and not because of any optimality properties.

Answered by kjetil b halvorsen on January 7, 2022

If you add more variables, even totally insignificant ones, R2 can only go up; this is not the case with adjusted R2. You can try running a multiple regression, then adding a random variable, and see what happens to R2 and what happens to the adjusted R2.
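A quick sketch of that experiment (simulated data, numpy only): over many random added variables, plain R2 never drops, while adjusted R2 falls most of the time.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 60
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(size=n)

def r2_pair(X, y):
    """R^2 and adjusted R^2 for OLS with the given design matrix."""
    n, k = X.shape  # k includes the intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = ((y - X @ beta) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    return r2, 1 - (1 - r2) * (n - 1) / (n - k)

X0 = np.column_stack([np.ones(n), x])
r2_0, adj_0 = r2_pair(X0, y)

trials, r2_drops, adj_drops = 1000, 0, 0
for _ in range(trials):
    z = rng.normal(size=n)  # a totally insignificant variable
    r2_1, adj_1 = r2_pair(np.column_stack([X0, z]), y)
    r2_drops += r2_1 < r2_0 - 1e-12
    adj_drops += adj_1 < adj_0
print(f"R2 dropped in {r2_drops} of {trials} trials; "
      f"adjusted R2 dropped in {adj_drops}")
```

R2 never decreases because the models are nested; adjusted R2 decreases in roughly two-thirds of trials, the fraction of draws whose t-statistic falls below 1 in absolute value.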

Answered by Oren Ben-Harim on January 7, 2022
