I am currently a data scientist on the Research in Artificial Intelligence in Linguistics and Systems (RAILS) team at Vail Systems in Chicago.
In addition to my work as a data scientist, I continue to collaborate with researchers across several universities, including the University of Oregon, the University of California, Davis, the University of California, San Diego, and the University of Texas at Austin. This research broadly aims to understand how language is encoded and represented in both humans and language models.
Pronouns: he/him/his
Preferred name: Zach
Email: znhoughton@gmail.com
In 2019, I graduated cum laude and with departmental honors (advised by Dr. Vsevolod Kapatsinski) from the University of Oregon with a B.A. in linguistics and a minor in Korean.
In 2025, I earned my Ph.D. in Linguistics at the University of California, Davis, advised by Dr. Emily Morgan, and my M.A. in Psychology, advised by Dr. Fernanda Ferreira.
My primary research interests lie at the intersection of error-driven learning and linguistic storage, that is, how the way we learn language shapes the way we represent it cognitively. A cornerstone of my research is the integration of computational modeling with experimental psycholinguistic methods.
Examples of the questions I am interested in include: How are linguistic representations encoded cognitively? What can Bayesian models and neural networks tell us about how internal representations are learned? How do humans and machines learn abstractions from context-rich input?