About Me

I am an aspiring PhD student in Computer Science. I am particularly interested in:

  • How do deep learning (DL) models learn about the world from observations? (world models)
  • What is the shared geometric structure of the representation spaces of different DL models?
  • How can we seamlessly integrate language with model-based RL, guided by an understanding of representation learning?

Therefore, my main focus is the interpretability of LLMs, VLMs, and model-based RL.

Below are some of my favorite papers; I hope they give you a sense of my research taste:

Research Interests

  • Representation learning
  • Interpretability of LLMs, VLMs, and model-based RL
  • Language-grounding in model-based RL

Current Research

This fall, I am researching the interpretability and moral reasoning of LLMs at the Relational Cognition Lab at the University of California, Irvine. Previously, I researched model-free RL at the University of Maryland, College Park (NeurIPS 2025 ARLET Workshop), narrative representations of LLMs at Soka University of America (NeurIPS 2025 LLM-Evaluation Workshop), VLMs for emotion recognition at Texas State University (IEEE UEMCON 2024), and spatiotemporal understanding in model-based RL at the University of Tokyo.