Our research tackles fundamental puzzles about how systems of natural and artificial intelligence learn about, remember, and reason with relational structures. How does the mind encode relations in experience, combine simple concepts to produce infinitely many new meanings, and draw analogies across diverse situations? How are these abilities made possible by neural mechanisms in the human brain? And in what ways can they be implemented in artificial neural networks? We study these problems jointly in minds, brains, and machines using behavior, fMRI, computational modeling, and neural network interpretability.
A 5-minute video from our lab director, Dr. Anna Leshinskaya
Lab News
July 25, 2025
Symposium announced at CogSci 2025 (San Francisco) on August 1 at 4:00pm: Cognitively Inspired Interpretability in Large Neural Networks! Check out the speaker list and details here.
July 18, 2025
Em Smullen presents their work on virtue representation at the ICML World Models Workshop! Check out the paper here.
June 9, 2025
Om Bhatt joins the lab as a junior specialist — welcome!
May 23, 2025
Our lab is awarded a grant from the John Templeton Foundation to investigate moral reasoning in humans and large language models!
March 7, 2025
Our lab is awarded a grant from Schmidt Sciences’ AI Safety Science Program! In collaboration with Seth Lazar (ANU) and Alice Oh (KAIST), our project seeks to empirically understand the cognitive mechanisms that LLMs implement during morally guided action decision-making, with a particular focus on combinatorial mechanisms. We are hiring for this project.
February 28, 2025
Our first cohort of undergraduate research assistants has joined the lab! Meet them here.
October 11, 2024
Our work on the non-linearity of relational concept combination in large language models is accepted at the Compositional Learning Workshop at NeurIPS 2024! Read the paper here.
September 30, 2024
Em Smullen joins the lab as a graduate student, and Michael McCoy joins as a research fellow. Welcome!