Joint Seminar in Technology and Information Management and Marketing
Learning about peer effects from many experiments: Regularized instrumental variable methods for massive meta-analysis
Prof. Dean Eckles, MIT Sloan School of Management
Abstract
The widespread adoption of randomized experiments (i.e., A/B tests) in the Internet industry means that there are often numerous well-powered experiments on a given product. Individual experiments are often simple "bake-off" evaluations of a new intervention: they allow us to estimate the effects of that particular intervention on outcomes of interest, but they are often not informative about the mechanisms for these effects or about what other interventions might do. We consider what else we can learn from a large set of experiments. In particular, we use many experiments to learn about the effects of the various endogenous variables (or mechanisms) via which the experiments affect outcomes. This involves treating the experiments as instrumental variables, so this setting is similar to, but somewhat different from, "many instrument" settings in econometrics and biostatistics. Motivated by the distribution of experiment first-stage effects, we present and evaluate regularization methods for improving on standard IV estimators. We illustrate these methods with an application to estimating peer effects in content production in online social networks.
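To make the instrumental-variable idea concrete, the following is a minimal sketch (not the talk's actual method) of two-stage least squares with a ridge-regularized first stage: experiment assignments Z act as instruments for an endogenous mechanism variable X, and the outcome Y is regressed on the fitted values. All variable names, the simulated data, and the penalty `alpha` are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 2000, 50  # observations and number of experiments (instruments)

Z = rng.normal(size=(n, k))             # experiment assignments (instruments)
pi = rng.normal(scale=0.2, size=k)      # many small first-stage effects
u = rng.normal(size=n)                  # unobserved confounder
X = Z @ pi + u + rng.normal(size=n)     # endogenous mechanism variable
Y = 1.5 * X + u + rng.normal(size=n)    # true causal effect of X on Y is 1.5

# First stage: ridge-regularized regression of X on Z,
# shrinking the many noisy first-stage coefficients toward zero.
alpha = 10.0                            # hypothetical ridge penalty
pi_hat = np.linalg.solve(Z.T @ Z + alpha * np.eye(k), Z.T @ X)
X_hat = Z @ pi_hat                      # fitted (experiment-driven) part of X

# Second stage: regress Y on the fitted values to estimate the effect of X.
beta_hat = (X_hat @ Y) / (X_hat @ X_hat)
print(beta_hat)
```

Naive OLS of Y on X is biased upward here because the confounder u enters both; the IV estimate isolates the variation in X induced by the experiments, and the ridge penalty tempers the many-weak-instrument bias that plain 2SLS would incur.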