Messi H.J. Lee
Washington University in St. Louis, University City, Missouri

I’m a fifth-year PhD Candidate in the Division of Computational and Data Sciences at Washington University in St. Louis (WashU). I am part of the Psychological & Brain Sciences Track, doing research at the Diversity Science Lab. My advisors are Calvin K. Lai in the Department of Psychology at Rutgers University and Jacob M. Montgomery in the Department of Political Science at WashU.
My research focuses on applying text-as-data methods to measure stereotyping in Large Language Models and Vision-Language Models. Recently, I’ve shifted my attention to measuring implicit bias in reasoning models. In this work, we found that reasoning models require significantly more computational resources to process association-incompatible pairings (e.g., men-family & women-career) than association-compatible ones (e.g., men-career & women-family). This finding suggests that even advanced AI systems exhibit processing patterns analogous to human implicit bias.
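To make the measurement concrete, here is a minimal sketch of how such a reasoning-token comparison could be run. This is an illustration under assumptions, not the paper’s protocol: it assumes the OpenAI Python SDK and its reasoning-token usage field, and the prompt, word list, and pairings are invented for this example.

```python
# Minimal sketch: compare reasoning-token usage for association-compatible
# vs. association-incompatible pairings. Assumes the OpenAI Python SDK and
# the usage.completion_tokens_details.reasoning_tokens field; the prompts
# and word list are illustrative, not the paper's stimuli.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

WORDS = "office, salary, kitchen, children, manager, wedding"
CONDITIONS = {
    "compatible": "Pair 'men' with career words and 'women' with family words.",
    "incompatible": "Pair 'men' with family words and 'women' with career words.",
}

for name, instruction in CONDITIONS.items():
    response = client.chat.completions.create(
        model="o3-mini",
        messages=[{"role": "user", "content": f"{instruction}\nWords: {WORDS}"}],
    )
    # Reasoning models report the length of their hidden reasoning here.
    tokens = response.usage.completion_tokens_details.reasoning_tokens
    print(f"{name}: {tokens} reasoning tokens")
```

In practice one would average over many stimuli and repeated trials; the sketch only shows where the token counts come from.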
After completing my PhD in May 2025, I intend to fulfill my military service obligation in South Korea as Technical Research Personnel (전문연구요원), preferably continuing research on AI Bias and Computational Social Science. If you know of any relevant opportunities, please feel free to email me at hojunlee[at]wustl[dot]edu.
news
| Date | News |
|---|---|
| Mar 17, 2025 | The pre-print of my new paper “Implicit Bias-Like Patterns in Reasoning Models” is now available on arXiv. We find that reasoning models (e.g., o3-mini) consume substantially more reasoning tokens when processing association-incompatible information than association-compatible information. This pattern parallels the delayed response times humans exhibit when processing association-incompatible information in the Implicit Association Test (IAT). |
| Mar 10, 2025 | The pre-print of my new paper “Visual Cues of Gender and Race are Associated with Stereotyping in Vision-Language Models” is now available on arXiv. We find that gender prototypicality is linked to greater homogeneity of group representations in VLM-generated texts. |
| Feb 03, 2025 | The pre-print of my new paper “Homogeneity Bias as Differential Sampling Uncertainty in Language Models” is now available on arXiv. We find that certain Vision-Language Models exhibit significantly more deterministic token sampling patterns when processing marginalized groups than dominant groups. This finding suggests a potential mechanism underlying homogeneity bias in language models. A rough sketch of this style of measurement appears below the table. |
| Jan 09, 2025 | I published my first blog post about my research on homogeneity bias in AI. You can find it on my website as a blog entry or read it as a LinkedIn article. |
| Oct 11, 2024 | I received a $1,000 Small Grant from the Center for the Study of Race, Ethnicity & Equity to expand my research on homogeneity bias in Large Language Models. The study will explore a broader range of models and parameter settings to examine when the bias appears in model outputs. |
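For the sampling-uncertainty finding (Feb 03 entry), here is a minimal sketch of one plausible way to quantify determinism; it is an illustration under assumptions, not the paper’s pipeline. It assumes the OpenAI Python SDK’s `logprobs`/`top_logprobs` options, uses a placeholder model and prompt, and takes mean Shannon entropy over each generated token’s top-5 alternatives as a proxy (lower entropy means more deterministic sampling).

```python
# Minimal sketch: per-token sampling entropy as a proxy for how
# deterministic a model's outputs are for a given prompt. Assumes the
# OpenAI Python SDK; the model, prompt, and top_logprobs are placeholders.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def mean_token_entropy(prompt: str, model: str = "gpt-4o-mini") -> float:
    """Average Shannon entropy over each generated token's top-5 alternatives."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        logprobs=True,
        top_logprobs=5,  # entropy is computed over these alternatives only
        max_tokens=64,
    )
    entropies = []
    for token in response.choices[0].logprobs.content:
        probs = [math.exp(alt.logprob) for alt in token.top_logprobs]
        total = sum(probs)  # renormalize the truncated distribution
        entropies.append(-sum(p / total * math.log(p / total) for p in probs))
    return sum(entropies) / len(entropies)

print(mean_token_entropy("Describe a typical day in the life of a college student."))
```

Comparing this average across prompts that describe different social groups would approximate the marginalized-versus-dominant contrast described above.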