Eunkyu (Eunice) Park

λ°•μ€κ·œ

Hello!πŸ‘‹πŸ» I am a Ph.D. Candidate in Artificial Intelligence at Seoul National University (SNUVL), advised by Gunhee Kim. Currently, I am a Visiting Researcher at CMU Human Computer Interaction Institute (HCII) with Motahhare Eslami, and work closely with Maarten Sap at CMU Language Technologies Institute (LTI).

My research centers on the trustworthiness and interpretability of multimodal AI systems β€” spanning bias-driven hallucinations, multimodal moral reasoning, and better scaffolded reasoning to evaluate and calibrate user trust. I design benchmarks, annotation pipelines, and preference learning techniques to improve transparency, safety, and alignment of multimodal systems.

In my free time I play golf β›³, watch football ⚽🩡, and enjoy good food and wine with good company β˜€οΈ.

News πŸ“°
Research ✨

My research asks: How can we design multimodal AI systems whose reasoning is transparent, trustworthy, and genuinely aligned with human values β€” and how do humans perceive, interact with, and sometimes misplace trust in these systems? I pursue this vision through three interrelated research themes:

01 Trustworthiness & Interpretability of Vision-Language Models
I develop methods to detect and localize failures in VLMs, examining how model biases lead to unfaithful outputs. I also study alignment β€” whether a model's internal reasoning and outputs faithfully reflect human values and intentions, rather than superficially mimicking them.
hallucination detection bias in VLMs alignment
CVPR 2025 HalLoc
02 Human Perception of AI Reasoning
I investigate how people evaluate, trust, and are misled by AI-generated reasoning chains. Using behavioral experiments and real-world deployments, I examine when chain-of-thought explanations genuinely support critical thinking versus when they create false confidence β€” and what interaction designs can encourage more careful human oversight.
chain-of-thought human-AI trust HCI reasoning evaluation
arXiv 2025 Critical or Compliant?
arXiv 2025 CoCoT
03 Multimodal Moral & Social Reasoning
I design benchmarks and preference learning frameworks that capture the continuous, pluralistic nature of human moral judgment across text and image contexts β€” moving beyond binary labels toward richer, more human-aligned supervision signals.
moral reasoning value alignment multimodal benchmarks
arXiv 2026 MM-SCALE
Selected Papers  View all β†’
Education πŸŽ“