Messi H.J. Lee

Technical Research Personnel. Seoul. Republic of Korea.


Hi! I’m Messi H.J. Lee. In May 2025, I completed my PhD in Computational and Data Sciences at Washington University in St. Louis (WashU). During my PhD, I was advised by two amazing advisors: Calvin K. Lai in the Department of Psychology at Rutgers University and Jacob M. Montgomery in the Department of Political Science at WashU.

My research focuses on the intersection of Artificial Intelligence and social psychology, where I evaluate stereotyping in AI models. I recently completed work on implicit bias in reasoning models, where we discovered that reasoning models require significantly more computational resources to process association-incompatible information (e.g., men-family and women-career) than association-compatible information (e.g., men-career and women-family). This finding suggests that even advanced AI systems exhibit processing patterns analogous to human implicit bias.

I am currently in South Korea fulfilling my military service obligation as a technical research personnel (전문연구요원), developing AI models for pathology and conducting research on bias in pathology-related AI models. If you would like to collaborate on topics related to AI bias and stereotyping, feel free to reach out to me at hojunlee1012[at]gmail[dot]com.

news

Jun 27, 2025 I started my military service as a technical research personnel (전문연구요원) at MTS Company, a company developing AI-powered diagnostic solutions in digital pathology. My work focuses on developing AI models and conducting research on algorithmic bias in medical AI applications.
Apr 10, 2025 I’m officially Dr. Lee now! I successfully defended my dissertation titled “Stereotyping in Language (Technologies): An Examination of Racial and Gender Stereotypes in Natural Language and Language Models.” You can watch my dissertation defense here. Thanks to everyone who supported me along this journey!
Mar 17, 2025 The pre-print of my new paper “Implicit Bias-Like Patterns in Reasoning Models” is now available on ArXiv. We find that reasoning models (i.e., o3-mini) consume substantially more reasoning tokens when processing association-incompatible information compared to association-compatible information. This pattern parallels the delayed response times humans exhibit when processing association-incompatible information in the Implicit Association Test (IAT).
Mar 10, 2025 The pre-print of my new paper “Visual Cues of Gender and Race are Associated with Stereotyping in Vision-Language Models” is now available on ArXiv. We find that gender prototypicality is linked to greater homogeneity of group representations in VLM-generated texts.
Feb 03, 2025 The pre-print of my new paper “Homogeneity Bias as Differential Sampling Uncertainty in Language Models” is now available on ArXiv. We find that certain Vision-Language Models exhibit significantly more deterministic token sampling patterns when processing marginalized groups compared to dominant groups. This finding suggests a potential mechanism underlying homogeneity bias in language models.

selected publications

  1. Implicit Bias-Like Patterns in Reasoning Models
    Messi H. J. Lee and Calvin K. Lai
    Mar 2025
  2. Vision-Language Models Generate More Homogeneous Stories for Phenotypically Black Individuals
    Messi H. J. Lee and Soyeon Jeon
    Mar 2025
  3. Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans
    Messi H. J. Lee, Jacob M. Montgomery, and Calvin K. Lai
    In The 2024 ACM Conference on Fairness, Accountability, and Transparency, Jun 2024