PhD Candidate, Department of Economics

Contact Information


Department of Economics
Northwestern University
2211 Campus Drive
Evanston, IL 60208

 

Education

Ph.D., Economics, Northwestern University, 2024 (expected)
M.A., Economics, Seoul National University, 2017
B.S., Industrial Engineering, 2014

 

Primary Fields of Specialization

Econometrics

 

Job Market Paper

“Testing Sign Agreement”

This article considers the problem of testing sign agreement among a finite number of means. Examples of this problem include detecting heterogeneous treatment effects with opposite signs, refuting the assumptions underlying the local average treatment effect, and testing political-affiliation alignment across multiple groups. For the null hypothesis that the means are all non-negative or all non-positive, I propose two novel statistical tests: the Least Favorable test and the Hybrid test. The main result is that both tests control size uniformly over a large class of distributions for the observed data in large samples. Compared with popular multiple testing procedures, the Least Favorable test can exhibit superior power, and it does so invariably when the test concerns two means. These are the first tests of sign agreement to accommodate arbitrary dependence among the estimators of any finite number of means. Results from simulation studies indicate that, in finite samples, the rejection probabilities of both tests attain the nominal level under the null hypothesis. The simulations further suggest that the Hybrid test is, overall, more powerful than the Least Favorable test when there are more than three means, and that this ranking reverses when there are only two means. I demonstrate the utility of both tests in an application inspired by Angelucci et al. (2015), in which I study the impacts of microloans on various groups and outcomes.

 

Publications

Whenever a “good” forecasting model is obtained through a thorough specification search, there remains a risk that its apparent performance stems from mere coincidence rather than from genuine predictive ability. To address this issue, various tests have been developed to compare the predictive abilities of competing forecasting models. One such test, the hybrid test of superior predictive ability proposed by Song (2012), concerns the null hypothesis that no alternative model within a finite set outperforms a benchmark model in terms of predictive ability. This article analyzes the theoretical properties of the hybrid test. It demonstrates with a simple example that the test may fail to be pointwise asymptotically valid at commonly used significance levels and may produce rejection rates exceeding 11% when the significance level is 5%. Generalizing this observation, it further provides a formal result that the pointwise asymptotic invalidity of the hybrid test persists under reasonable conditions. Monte Carlo simulations support the theoretical findings.

This paper provides a user’s guide to the general theory of approximate randomization tests developed in Canay, Romano, and Shaikh (2017a, “Randomization Tests under an Approximate Symmetry Assumption,” Econometrica 85 (3): 1013–30) when specialized to linear regressions with clustered data. An important feature of the methodology is that it applies to settings in which the number of clusters is small, even as small as five. We provide a step-by-step algorithmic description of how to implement the test and construct confidence intervals for the parameter of interest. In doing so, we additionally present three novel results concerning the methodology: we show that the method admits an equivalent implementation based on weighted scores; we show that the test and confidence intervals are invariant to whether the test statistic is studentized; and we prove convexity of the confidence intervals for scalar parameters. We also articulate the main requirements underlying the test, emphasizing in particular common pitfalls that researchers may encounter.

 
