
How to deal with large combinations of factors

Cross Validated. Asked by mat on January 14, 2021

I am faced with a new problem that I cannot really tackle the way I usually do my analyses, and I would appreciate any input you may have.

The research goal is to identify the factors and combinations of factors that maximize the intention to purchase a product X. We manipulated the presentation of X across 10+ variables, each with 2-3 levels: for instance, the color of the product (blue or green), the size (small, medium, large), etc. We generated all possible combinations (say 500), presented 5 randomly picked presentations of X to each participant, and asked them to report their intention to purchase the product for each presentation on a Likert scale from 1 (not at all) to 5 (very much). Given the number of participants, I can expect, on average, ~30 responses for each unique presentation of X.
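For concreteness, here is a minimal Python sketch of how the design can be represented in long format. Only color and size are taken from the actual study; the other factor names and levels (price_frame, packaging), the number of participants, and the column names are made-up placeholders.

```python
import itertools
import random

import pandas as pd

# Factor definitions: color and size come from the design described above;
# the remaining names/levels are placeholders standing in for the 10+ variables.
factors = {
    "color": ["blue", "green"],
    "size": ["small", "medium", "large"],
    "price_frame": ["discount", "regular"],
    "packaging": ["minimal", "premium", "gift"],
}

# Enumerate every unique presentation of X (full factorial).
presentations = [
    dict(zip(factors.keys(), levels))
    for levels in itertools.product(*factors.values())
]

# Each participant rates 5 randomly picked presentations on a 1-5 scale.
rows = []
for participant in range(1, 201):          # e.g. 200 participants (placeholder)
    for pres in random.sample(presentations, k=5):
        rows.append({"participant": participant, **pres, "intent": None})

df = pd.DataFrame(rows)   # long format: one row per rating
print(len(presentations), "unique presentations;", len(df), "ratings")
```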

I am used to experimental settings with just a few factors and would usually rely on OLS or LMM regressions to assess the main, simple, and interaction effects. But as you can see, in this case I probably don't have enough power to test very high-order interactions, and at the same time it would be a complete mess to try to decompose hundreds of interactions and make sense of them. I thought another approach would be to simply compute a massive repeated-measures ANOVA with post-hoc tests corrected for multiple comparisons (e.g., Bonferroni). But again, I am not confident this would be the best approach given the high number of unique presentations of X.
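For reference, the kind of model I would normally fit looks roughly like the sketch below: a linear mixed model with a random intercept per participant and main effects only, since the full interaction structure is exactly what becomes intractable. It assumes a long-format data frame df like the one above with an observed intent column, and it treats the 1-5 rating as approximately continuous (an ordinal model would be a more faithful alternative).

```python
import statsmodels.formula.api as smf

# Main-effects-only linear mixed model with a random intercept per participant.
# Factor names other than color and size are the placeholders from the sketch above.
model = smf.mixedlm(
    "intent ~ C(color) + C(size) + C(price_frame) + C(packaging)",
    data=df,
    groups=df["participant"],
)
result = model.fit()
print(result.summary())
```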

Do you know of any statistical technique suited to this kind of problem? I would also very much appreciate suggestions for exploratory analyses or visual ways to represent the data and gain insight.


Edit: There are in fact way more than 500 unique versions of X.
