Our research tackles fundamental puzzles about how systems of natural and artificial intelligence learn about, remember, and reason with relational structures. How does the mind encode relations in experience, combine simple concepts to produce infinitely many new meanings, and draw analogies across diverse situations? How do neural mechanisms in the human brain make these abilities possible? And in what ways can they be implemented in artificial neural networks? We study these problems jointly in minds, brains, and machines using behavior, fMRI, computational modeling, and neural network interpretability.
Lab News
Oct 11, 2024
Our work on the non-linearity of relational concept combination in large language models has been accepted at the Compositional Learning Workshop at NeurIPS 2024! Read the paper here.
Sep 30, 2024
Em Smullen joins the lab as a graduate student, and Michael McCoy joins as a research fellow. Welcome!