Algorithmic Approximations to Optimal Behavior


Description

In the real world, planning and decision making often take place under strict computational and time constraints, which makes optimal behavior unattainable in all but the simplest cases. In another strand of research, I investigate the algorithmic strategies that humans use to approximate optimal solutions across a variety of domains.


Related Papers

Neural evidence for strategy reuse

Neural evidence that humans reuse strategies to solve new tasks

How does the brain generalize previously useful strategies to novel tasks?

January 2025 · Sam Hall-McMaster, Momchil S. Tomov, Samuel J. Gershman, Nicolas W. Schuck
Predictive representations

Predictive Representations: Building Blocks of Intelligence

What is the successor representation? What is its role in biological and artificial intelligence?

January 2024 · Wilka Carvalho *, Momchil S. Tomov *, William de Cothi *, Caswell Barry, Samuel J. Gershman
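
As a rough illustration of the concept behind this paper (my own sketch, not code from the paper): the successor representation M(s, s') counts the expected discounted number of future visits to state s' when starting from state s under a fixed policy, and it has a simple closed form given the policy's transition matrix.

```python
import numpy as np

# Illustrative sketch (not from the paper): for a fixed policy with
# state-transition matrix T and discount gamma, the successor representation is
#   M = (I - gamma * T)^{-1},
# and state values under reward vector r follow as V = M @ r.

def successor_representation(T: np.ndarray, gamma: float = 0.95) -> np.ndarray:
    """Closed-form SR for a policy with transition matrix T and discount gamma."""
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

# Hypothetical 3-state cyclic environment, used only for demonstration.
T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
M = successor_representation(T, gamma=0.9)

r = np.array([0.0, 0.0, 1.0])  # reward only in the third state
V = M @ r                      # values decompose into SR times reward
print(M)
print(V)
```

The appeal of this decomposition is that when rewards change but the environment does not, values can be recomputed from the cached SR without replanning.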
Multi-task reinforcement learning

Multi-task reinforcement learning in humans

How do humans transfer knowledge across different tasks?

June 2021 · Momchil S. Tomov *, Eric Schulz *, Samuel J. Gershman
Neural correlates of uncertainty

Dissociable neural correlates of uncertainty underlie different exploration strategies

How does the brain represent different forms of uncertainty? How do those representations determine exploratory choices?

May 2020 · Momchil S. Tomov, Van Q. Truong, Rohan A. Hundia, Samuel J. Gershman
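
As background for this paper, a hedged toy sketch (my own, not the paper's model) of two classic exploration strategies in a two-armed bandit: directed exploration adds an uncertainty bonus to each option's estimated value (UCB-style), while random exploration samples candidate values from their posteriors (Thompson-style).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-armed Gaussian bandit: each arm's value has a posterior
# with mean mu[k] and standard deviation sigma[k]. Numbers are arbitrary.
mu = np.array([0.4, 0.5])
sigma = np.array([0.3, 0.1])

def choose_ucb(mu, sigma, bonus=1.0):
    """Directed exploration: prefer arms with high value plus an uncertainty bonus."""
    return int(np.argmax(mu + bonus * sigma))

def choose_thompson(mu, sigma):
    """Random exploration: sample each arm's value from its posterior, pick the max."""
    return int(np.argmax(rng.normal(mu, sigma)))

print("UCB choice:", choose_ucb(mu, sigma))            # favors the uncertain arm
print("Thompson choice:", choose_thompson(mu, sigma))  # stochastic choice
```
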
Hierarchical representations

Discovery of hierarchical representations for efficient planning

Why do humans represent their environments hierarchically? How are these hierarchical representations learned?

April 2020 · Momchil S. Tomov, Samyukta Yagati, Agni Kumar, Wanqian Yang, Samuel J. Gershman