Working Papers

Valuing Pharmaceutical Drug Innovations (joint with Gaurab Aryal, Federico Ciliberto, and Katya Khmelnitskaya) (SSRN Working Paper)

Submitted (new draft 4/12/2024)

Abstract: We propose a methodology to estimate the market value of pharmaceutical drugs. Our approach combines an event study with a model of discounted cash flows, using stock market responses to drug development announcements to infer these values. We estimate that, on average, a successful drug is valued at $1.62 billion, and its value at the discovery stage is $64.3 million, with substantial heterogeneity across major diseases. Leveraging these estimates, we also determine average drug development costs at various stages. Furthermore, we explore how our estimates can be used to design policies that support drug development through drug buyouts and cost-sharing agreements.
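
The valuation logic in the abstract can be sketched as a simple backward induction: a drug's value at an early stage is its value-if-successful discounted through the success probabilities and durations of the remaining development stages. In the sketch below, only the $1.62 billion terminal value comes from the paper; the stage probabilities, durations, discount rate, and the omission of per-stage costs are illustrative assumptions, not the paper's numbers.

```python
# Hypothetical sketch of staged drug valuation by backward induction.
# Only the $1.62B value-if-successful is from the paper; all other
# numbers below are illustrative assumptions.

def discovery_stage_value(value_if_successful, stages, discount_rate):
    """Work backward through stages: expected value at each stage is the
    success probability times the discounted value of reaching the next
    stage (development costs omitted for simplicity)."""
    value = value_if_successful
    for prob_success, years in reversed(stages):
        value = prob_success * value / (1 + discount_rate) ** years
    return value

# (success probability, duration in years): illustrative figures for
# preclinical, Phase I, Phase II, Phase III, and regulatory approval
stages = [(0.35, 3.0), (0.60, 1.5), (0.35, 2.5), (0.60, 2.5), (0.85, 1.0)]
v0 = discovery_stage_value(1.62e9, stages, discount_rate=0.10)
print(f"Discovery-stage value: ${v0 / 1e6:.1f} million")
```

Because the stage parameters are made up, the printed number will not match the paper's $64.3 million estimate; the point is only the mechanics of compounding success probabilities and discounting.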

Zoomers and Boomers: Asset Prices and Intergenerational Inequality (joint with Roger E. A. Farmer) (NBER Working Paper) (SSRN Working Paper)

Submitted

Abstract: We construct a perpetual youth DSGE model with aggregate uncertainty in which markets are dynamically complete and agents have Epstein-Zin preferences. We prove that, when endowments have a realistic hump-shaped age profile, our model has three steady-state equilibria. One of these equilibria is dynamically inefficient and displays real price indeterminacy. We estimate the parameters of our model and find that a fourth-order approximation around the indeterminate steady state provides the best fit to U.S. data. Our work interprets the large and persistent generational inequality observed in Western economies over the past century as the result of uninsurable income shocks to birth cohorts.

 

Publications

Learning About the Long Run (joint with Emi Nakamura and Jón Steinsson) (pdf) (NBER Working Paper)

Forthcoming in the Journal of Political Economy

Abstract: Forecasts of professional forecasters are anomalous: they are biased, forecast errors are autocorrelated, and forecast revisions predict forecast errors. Sticky or noisy information models seem like unlikely explanations for these anomalies: professional forecasters pay attention constantly and have precise knowledge of the data in question. We propose that these anomalies arise because professional forecasters don’t know the model that generates the data. We show that Bayesian agents learning about hard-to-learn features of the data-generating process (low-frequency behavior) can generate all the prominent anomalies emphasized in the literature. We show this for two applications: professional forecasts of nominal interest rates for the sample period 1980-2019 and CBO forecasts of GDP growth for the sample period 1976-2019. Our learning model for interest rates also provides an explanation for deviations from the expectations hypothesis of the term structure that does not rely on time variation in risk premia.
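
The mechanism described in the abstract can be illustrated with a toy steady-state Kalman filter (equivalently, exponential smoothing). This is not the paper's model; it is a minimal sketch, assuming a local-level setup in which the agent's gain understates the variability of a slow-moving long-run component, which produces positively autocorrelated forecast errors, one of the anomalies the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
# Data-generating process (illustrative): slow-moving long-run mean plus noise
mu = np.cumsum(0.05 * rng.standard_normal(T))   # hard-to-learn low-frequency component
y = mu + rng.standard_normal(T)                 # observed series

# Agent with a local-level model: the steady-state Kalman filter reduces to
# exponential smoothing. A gain below the optimal one (the slow component is
# hard to learn) leaves persistent tracking error in the forecasts.
gain = 0.01
m = np.zeros(T)                                 # belief about the long-run component
for t in range(1, T):
    m[t] = m[t - 1] + gain * (y[t - 1] - m[t - 1])

errors = y - m                                  # one-step-ahead forecast errors
autocorr = np.corrcoef(errors[1:], errors[:-1])[0, 1]
print(f"forecast-error autocorrelation: {autocorr:.2f}")
```

With an optimally chosen gain, the filter's forecast errors would be serially uncorrelated; the autocorrelation here comes entirely from the too-small gain.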

Pockets of Predictability (joint with Lawrence Schmidt and Allan Timmermann) (pdf) (SSRN Working Paper)

Journal of Finance, 2023

Abstract: For many benchmark predictor variables, short-horizon return predictability in the U.S. stock market is local in time: short periods with significant predictability (‘pockets’) are interspersed with long periods showing little or no evidence of return predictability. We document this result empirically using a flexible time-varying parameter model that estimates predictive coefficients as a nonparametric function of time, and we explore possible explanations of this finding, including time-varying risk premia, for which we find only limited support. Conversely, pockets of return predictability are consistent with a sticky expectations model in which investors only slowly update their beliefs about a persistent component in the cash flow process.

Note: A minor coding error impacted some of the results using the original method in the paper. In this note, we show that a simple adjustment to the estimation procedure restores the key results of the published paper.
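
The idea of estimating a predictive coefficient as a nonparametric function of time can be illustrated with a kernel-weighted local regression. This is a hypothetical sketch, not the paper's estimator (nor the adjustment described in the note): a slope that is nonzero only in a short window of simulated data shows up in the local estimates as a "pocket" of predictability.

```python
import numpy as np

# Illustrative simulation: returns are predictable by x only in a short window
rng = np.random.default_rng(1)
T = 1000
x = rng.standard_normal(T)                                     # predictor
b_true = np.where((np.arange(T) >= 400) & (np.arange(T) < 500), 1.0, 0.0)
r = b_true * x + rng.standard_normal(T)                        # simulated returns

def local_slope(t, bandwidth=30.0):
    """Gaussian-kernel weighted OLS slope of r on x around date t,
    i.e. the predictive coefficient as a function of time."""
    w = np.exp(-0.5 * ((np.arange(T) - t) / bandwidth) ** 2)
    xd = x - np.average(x, weights=w)                          # weighted demeaning
    rd = r - np.average(r, weights=w)
    return np.sum(w * xd * rd) / np.sum(w * xd * xd)

beta_in, beta_out = local_slope(450), local_slope(800)
print(f"slope inside pocket: {beta_in:.2f}, outside: {beta_out:.2f}")
```

The local estimate is large inside the window and near zero outside it, which is the qualitative pattern the abstract calls a pocket.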

The Discretization Filter: A Simple Way to Estimate Nonlinear State Space Models (pdf) (SSRN Working Paper)

Quantitative Economics, 2021

Abstract: Existing methods for estimating nonlinear dynamic models are either too computationally complex to be of practical use, or rely on local approximations that often fail to adequately capture the nonlinear features of interest. I develop a new method, the discretization filter, for approximating the likelihood of nonlinear, non-Gaussian state space models. I establish that the associated maximum likelihood estimator is strongly consistent, asymptotically normal, and asymptotically efficient. Through simulations, I show that the discretization filter is orders of magnitude faster than alternative nonlinear techniques for the same level of approximation error, and I provide practical guidelines for applied researchers. I apply my approach to estimate a New Keynesian model with a zero lower bound on the nominal interest rate. After accounting for the zero lower bound, I find that the slope of the Phillips Curve is 0.076, which is less than 1/3 of typical estimates from linearized models. This suggests a strong decoupling of inflation from the output gap and larger real effects of unanticipated changes in interest rates in post-Great Recession data.
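
A minimal sketch of the idea behind a discretization filter: replace the continuous state of a state space model with a finite grid, build a Markov transition matrix on that grid, and evaluate the likelihood with the standard HMM forward recursion. The example below uses a linear-Gaussian AR(1) state only because it is easy to simulate; the model, grid rule, and parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def normal_pdf(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def discretized_loglik(y, rho, sigma_state, sigma_obs, n_grid=51):
    """Approximate log-likelihood of y_t = x_t + obs noise,
    x_t = rho * x_{t-1} + state noise, via a discretized state."""
    sd_x = sigma_state / np.sqrt(1 - rho ** 2)        # stationary sd of the state
    grid = np.linspace(-4 * sd_x, 4 * sd_x, n_grid)
    # Transition matrix: density of x' given x, renormalized over the grid
    P = normal_pdf(grid[None, :], rho * grid[:, None], sigma_state)
    P /= P.sum(axis=1, keepdims=True)
    prob = np.full(n_grid, 1.0 / n_grid)              # flat prior over grid points
    loglik = 0.0
    for yt in y:                                      # HMM forward recursion
        prob = prob @ P                               # predict
        joint = prob * normal_pdf(yt, grid, sigma_obs)  # update with observation
        loglik += np.log(joint.sum())
        prob = joint / joint.sum()
    return loglik

# Simulate from the model and compare the likelihood at the true vs. a wrong rho
rng = np.random.default_rng(2)
x, y = 0.0, []
for _ in range(300):
    x = 0.9 * x + 0.2 * rng.standard_normal()
    y.append(x + 0.1 * rng.standard_normal())
ll_true = discretized_loglik(np.array(y), 0.9, 0.2, 0.1)
ll_wrong = discretized_loglik(np.array(y), 0.2, 0.2, 0.1)
print(ll_true, ll_wrong)
```

The approximated likelihood is higher at the true parameter, which is what makes the discretized likelihood usable for maximum likelihood estimation; the paper's contribution is doing this for genuinely nonlinear, non-Gaussian models with formal asymptotic guarantees.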

Discretizing Nonlinear, Non-Gaussian Stochastic Processes with Exact Conditional Moments (joint with Alexis Akira Toda) (pdf) (SSRN Working Paper)

Quantitative Economics, 2017

Abstract: Approximating stochastic processes by finite-state Markov chains is useful for reducing computational complexity when solving dynamic economic models. We provide a new method for accurately discretizing general Markov processes by matching low-order moments of the conditional distributions using maximum entropy. In contrast to existing methods, our approach is not limited to linear Gaussian autoregressive processes. We apply our method to numerically solve asset pricing models with various underlying stochastic processes for the fundamentals, including a rare disasters model. Our method outperforms existing methods in solution accuracy by orders of magnitude, while drastically simplifying the solution algorithm. The performance of our method is robust to parameters such as the number of grid points and the persistence of the process.
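
The maximum-entropy idea can be sketched in one dimension: subject to moment constraints, the entropy-maximizing probabilities over fixed grid points take an exponential-tilting form, and the tilting parameter is solved so the discrete distribution matches a target conditional moment exactly. The grid, the target mean, and the use of a single moment (the paper matches several low-order conditional moments) are illustrative choices for this sketch.

```python
import numpy as np

def match_mean(grid, target_mean):
    """Maximum-entropy weights over fixed grid points matching a target mean.
    The solution has exponential-tilting form p_i ∝ exp(lam * x_i); solve
    for the tilting parameter lam by bisection (the mean is monotone in lam)."""
    lo, hi = -50.0, 50.0
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        w = np.exp(lam * (grid - grid.mean()))   # centering for numerical stability
        p = w / w.sum()
        if p @ grid < target_mean:
            lo = lam
        else:
            hi = lam
    return p

grid = np.linspace(-2, 2, 9)
p = match_mean(grid, 0.7)    # e.g. target the conditional mean rho * x = 0.7
print(p @ grid)              # matches the target to high precision
```

Because the weights are strictly positive and sum to one by construction, the resulting transition probabilities are always a valid distribution, regardless of the grid, which is part of what makes the approach robust.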

 

Research in Progress

  • Estimating High-Dimensional State Space Models

  • What Does the Market Think? (joint with Daniel Murphy and Kieran Walsh)

  • Disagreement About the Term Structure of Inflation Expectations (draft coming soon) (joint with Hie Joo Ahn)

  • Forecast Anomalies and Parameter Uncertainty