Large language models (LLMs) now perform extremely well on many natural language processing tasks. Their ability to convert legal texts into data may offer empirical legal studies (ELS) scholars a low-cost alternative to research assistants in many contexts. However, less complex computational text-analysis techniques, such as topic modeling and sentiment analysis, are more interpretable than LLMs. In this paper, we highlight these differences by comparing LLMs with less complex models on three ELS-related tasks. Our findings suggest that ELS research will, for the time being, benefit from combining LLMs with other techniques to exploit the strengths of each approach.

Citation
Michael A. Livermore, Felix Herron & Daniel Rockmore, Language Model Interpretability and Empirical Legal Studies, Journal of Institutional and Theoretical Economics (2024).