In the real world, planning and decision-making often take place under strict computational and time constraints, making optimal behavior unattainable in all but the simplest cases. In another strand of research, I investigate the algorithmic strategies that humans employ to approximate optimal solutions across a variety of domains.
Related Papers
Neural evidence that humans reuse strategies to solve new tasks
How does the brain generalize previously useful strategies to novel tasks?
Predictive Representations: Building Blocks of Intelligence
What is the successor representation? What is its role in biological and artificial intelligence?
Multi-task reinforcement learning in humans
How do humans transfer knowledge across different tasks?
Dissociable neural correlates of uncertainty underlie different exploration strategies
How does the brain represent different forms of uncertainty? How do those representations determine exploratory choices?
Discovery of hierarchical representations for efficient planning
Why do humans represent their environments hierarchically? How are these hierarchical representations learned?