The neural architecture of theory-based reinforcement learning

How does the brain build mental models of rich, dynamic domains, such as video games?

January 2023 · Momchil S. Tomov, Pedro Tsividis, Thomas Pouncy, Joshua B. Tenenbaum, Samuel J. Gershman

Causal inference gates corticostriatal learning

How do causal inferences shape reward-based learning in the brain?

August 2021 · Hayley M. Dorfman*, Momchil S. Tomov*, Bernice Cheung, Dennis Clarke, Samuel J. Gershman*, Brent L. Hughes*

Multi-task reinforcement learning in humans

How do humans transfer knowledge across different tasks?

June 2021 · Momchil S. Tomov*, Eric Schulz*, Samuel J. Gershman

Dissociable neural correlates of uncertainty underlie different exploration strategies

How does the brain represent different forms of uncertainty? How do those representations determine exploratory choices?

May 2020 · Momchil S. Tomov, Van Q. Truong, Rohan A. Hundia, Samuel J. Gershman

Discovery of hierarchical representations for efficient planning

Why do humans represent their environments hierarchically? How are these hierarchical representations learned?

April 2020 · Momchil S. Tomov, Samyukta Yagati, Agni Kumar, Wanqian Yang, Samuel J. Gershman