Cross Validated, asked on January 7, 2022

In a recent thread, use of adjusted $R^2$ ($R^2_{adj.}$) is mentioned in the context of model selection, e.g.

> The adjustment was invented as a solution to problems caused by variable selection

**Question:** Is there any justification for using $R^2_{adj.}$ for model selection? That is, does $R^2_{adj.}$ have any optimality properties in the context of model selection?

For example, AIC is an efficient criterion and BIC is a consistent one, but $R^2_{adj.}$ coincides with neither, which makes me wonder whether it can be optimal in some other sense.

I would propose six optimality properties:

- Overfit Mitigation
- Simplicity and Parsimony
- General Shared Understanding
- Semi-Efficient Factor Identification
- Robustness to Sample Size Change
- Explanatory Utility

**Overfit Mitigation**

What kind of model is overfit? In part, this depends on the model's use case. Suppose we are using a model to test whether a hypothesized factor-level relationship exists. In that case a model which tends to allow spurious relations is overfit.

"The use of an adjusted R2...is an attempt to account for the phenomenon of the R2 automatically and spuriously increasing when extra explanatory variables are added to the model." Wikipedia.

**Simplicity and Parsimony**

Parsimony is valued on both normative and economic grounds. Occam's Razor is an example of a norm, and depending on what we mean by "justification," it might pass or fail.

The economic rationale for simplicity and parsimony is harder to dismiss:

- Complex models with many factors are expensive to gather data for.
- Complex models can be more expensive to execute.
- Complex models are hard to communicate and think through. This creates business and legal risks, as well as plain time spent communicating from one person to another.

Given two models with equal explanatory power ($R^2$), then, $R^2_{adj.}$ selects the simpler, more parsimonious one.
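For reference, the adjustment in question, with $n$ observations and $p$ predictors, is:

$$R^2_{adj.} = 1 - (1 - R^2)\,\frac{n-1}{n-p-1}$$

Holding $R^2$ fixed, increasing $p$ strictly lowers $R^2_{adj.}$, so between two models with the same $R^2$ the less complex one wins.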

**General Shared Understanding**

Justification involves shared understanding. Consider a peer-review situation. If the reviewer and the reviewed lack a shared understanding of model selection, questions or rejections may occur.

$R^2$ is an elementary statistical concept, and even those familiar only with elementary statistics generally understand that $R^2$ is gameable and that $R^2_{adj.}$ is preferred to $R^2$ for the reasons above.

Sure, there may be better choices than $R^2_{adj.}$, such as AIC and BIC, but if the reviewer is unfamiliar with these then their use may not succeed as a justification. Worse, the reviewer may themselves be mistaken and demand AIC or BIC when neither is required; that demand is itself unjustified.

My limited understanding is that AIC is now considered rather arbitrary by many, specifically the 2s in the formula. WAIC, DIC, and LOO-CV have been suggested as preferable; see here.

I hope that by "justified" we don't mean "no better criterion exists," because some better criterion might always exist unbeknownst to us, so that style of justification always fails. Instead, "justified" ought to mean "satisfies the requirement at hand," in my view.

**Semi-Efficient Factor Identification**

Caveat: I made up this term and I could be using it wrong :)

Basically, if we are interested in identifying true factor relations, we should expect a factor's p-value to be well below 0.5. Maximizing $R^2_{adj.}$ roughly satisfies this, since adding a factor with a high p-value reduces $R^2_{adj.}$. The match is not exact: $R^2_{adj.}$ generally penalizes p-values above roughly 0.32, since it increases only when the added factor's t-statistic exceeds 1 in absolute value.
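For what it's worth, the cutoff is a textbook identity: adding one regressor raises $R^2_{adj.}$ precisely when that regressor's t-statistic in the larger model exceeds 1 in absolute value, which for large samples corresponds to a two-sided p-value of roughly 0.32. A NumPy sketch checking the identity (the simulated data and names are my own):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))
y = X[:, 0] + rng.normal(size=n)  # only the first column matters

def fit(X, y):
    """OLS with intercept; return (t-statistics incl. intercept, adjusted R^2)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    df = len(y) - X1.shape[1]
    s2 = (resid @ resid) / df  # residual variance estimate
    se = np.sqrt(np.diag(s2 * np.linalg.inv(X1.T @ X1)))
    t = beta / se
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    p = X1.shape[1] - 1
    adj = 1.0 - (1.0 - r2) * (len(y) - 1) / (len(y) - p - 1)
    return t, adj

t_full, adj_full = fit(X, y)

# Dropping the last (noise) column: the full model's adj R^2 is higher
# if and only if that column's |t| in the full model exceeds 1.
_, adj_wo_noise = fit(X[:, :2], y)
assert (adj_full > adj_wo_noise) == (abs(t_full[-1]) > 1)

# Dropping the genuinely informative first column: same rule, |t| >> 1 here.
_, adj_wo_signal = fit(X[:, 1:], y)
assert (adj_full > adj_wo_signal) == (abs(t_full[1]) > 1)
```

For comparison, AIC's analogous cutoff for one added parameter is roughly $t^2 > 2$, i.e. a two-sided p-value near 0.16, which is the stricter penalty mentioned below.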

It's true that AIC penalizes more in general, but I'm not sure that's a good thing if the goal is to identify all observed factors that have an identifiable relation, even if only directionally, in a given data set.

**Robustness to Sample Size Change**

In the comments of this post, Scortchi - Reinstate Monica notes that it "makes no sense to compare likelihoods (or therefore AICs) of models fitted on different nos observations." In contrast, $R^2$ and $R^2_{adj.}$ are absolute measures that can still be compared when the number of samples changes.

This might be useful in the case of a questionnaire that includes some optional questions and partial responses. It's of course important to be mindful of issues like response bias in such cases.

**Explanatory Utility**

Here, we are told that "R2 and AIC are answering two different questions...R2 is saying something to the effect of how well your model explains the observed data...AIC, on the other hand, is trying to explain how well the model will predict on new data."

So if the use case is non-predictive, such as in the case of theory-driven, factor-level hypothesis testing, AIC may be considered inappropriate.

Answered by John Vandivier on January 7, 2022

I don't know if $R^2_{\text{adj.}}$ has any optimality properties for model selection, but it is surely taught (or at least mentioned) in that context. One reason might be that most students have met $R^2$ early on, so there is something to build on.

One example is the following exam paper from the University of Oslo (see problem 1). The text used in that course, *Regression Methods in Biostatistics: Linear, Logistic, Survival, and Repeated Measures Models*, second edition, by Eric Vittinghoff, David V. Glidden, Stephen C. Shiboski, and Charles E. McCulloch, mentions $R^2_{\text{adj.}}$ early in its chapter 10 on variable selection (as penalizing less than AIC, for example), but neither it nor AIC is mentioned in the summary/recommendations in section 10.5.

So it is maybe mostly used didactically, as an introduction to the problems of model selection, and not because of any optimality properties.

Answered by kjetil b halvorsen on January 7, 2022

Answer for part 1:

- If you add more variables, even totally insignificant ones, $R^2$ can only go up; this is not the case with adjusted $R^2$. You can try running a multiple regression, then adding a random variable, and seeing what happens to $R^2$ and to the adjusted $R^2$.

Answered by Oren Ben-Harim on January 7, 2022
