About Me

I am a prospective PhD student in Computer Science. I am particularly interested in:

  • How do deep learning (DL) models learn about the world from observations? (world models)
  • What geometric structure is shared across the representation spaces of different DL models?
  • How can we seamlessly integrate language and model-based RL, guided by an understanding of representation learning?


Here are some of my favorite papers; I hope they give you a sense of my research taste:

Research Interests

  • Representation learning
  • Interpretability
  • World models

Current Research

Although I summarized my interests above, my passion extends to the science of deep learning, language grounding in world models, contrastive learning, and more. In general, I am interested in how representations are structured in neural networks. This fall, I am working on interpretability and moral reasoning in LLMs at the Relational Cognition Lab at the University of California, Irvine. Previously, I researched model-free RL at the University of Maryland, College Park (NeurIPS 2025 ARLET Workshop); narrative representations in LLMs at Soka University of America (NeurIPS 2025 LLM-Evaluation Workshop); VLMs for emotion recognition at Texas State University (IEEE UEMCON 2024); and spatiotemporal understanding in model-based RL at the University of Tokyo.