We're hiring! Fully-funded PhD positions in Human & Machine Intelligence (Neuro & AI) at Boston College

How does the brain support adaptive decision making in the real world?

Recent advances in AI have provided us with models that rival humans on challenging naturalistic tasks and can serve as starting points for tackling this question head-on. At the same time, as frontier AI labs push the limits of scaling laws, many doubt whether more data and compute alone will lead to human-level learning, planning, and generalization.

The new Human & Machine Intelligence Lab at Boston College is recruiting 1-2 PhD students to work on reverse-engineering naturalistic learning and decision making in the brain. Specifically, we aim to understand:

  1. How the human brain learns internal models of complex, naturalistic environments.

  2. How it uses these models to plan toward distant goals.

  3. How it generalizes this knowledge to new environments.

Building on recent work in theory-based RL [1], you will tackle these questions by leveraging state-of-the-art AI models (e.g., DDQN, MuZero, LLMs/VLMs) to analyze behavioral, fMRI, and MEG data from human subjects engaging in rich tasks, such as learning to play new video games. You will also have the opportunity to design and conduct experiments (behavior/fMRI/MEG) to test your hypotheses.
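For a flavor of what it looks like to use computational models to analyze behavioral and neural data, below is a minimal, purely illustrative Python sketch: it simulates a simple Q-learning agent on a toy two-armed bandit and regresses its trial-by-trial prediction errors against a (here simulated) trial-level neural signal. The task, parameters, and variable names are assumptions for illustration, not the lab's actual tasks or analysis pipeline.

```python
import numpy as np

# Simulate choices from a simple Q-learning agent on a toy two-armed bandit,
# then regress its trial-by-trial prediction errors against a neural signal.
rng = np.random.default_rng(0)
n_trials, alpha, temp = 200, 0.3, 3.0   # learning rate and softmax temperature (illustrative)
reward_probs = np.array([0.7, 0.3])     # hypothetical task parameters

Q = np.zeros(2)
prediction_errors = np.zeros(n_trials)
for t in range(n_trials):
    p_choose = np.exp(temp * Q) / np.exp(temp * Q).sum()  # softmax policy
    choice = rng.choice(2, p=p_choose)
    reward = float(rng.random() < reward_probs[choice])
    prediction_errors[t] = reward - Q[choice]              # model-derived regressor
    Q[choice] += alpha * prediction_errors[t]

# Stand-in for a preprocessed neural measure aligned to trials (e.g., one value
# per trial from an fMRI ROI or an MEG sensor); here it is simulated.
neural_signal = 0.5 * prediction_errors + rng.normal(0, 1, n_trials)

# Ordinary least squares: does the model-derived signal explain neural variance?
X = np.column_stack([np.ones(n_trials), prediction_errors])
coef, *_ = np.linalg.lstsq(X, neural_signal, rcond=None)
print(f"estimated prediction-error coefficient: {coef[1]:.2f}")
```

The same logic extends to the richer models mentioned above, where the regressors can be quantities read out from a DDQN, MuZero, or theory-based agent playing the same game as the participant.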

The lab is led by Momchil Tomov (starting in Fall 2026) and is joint between the Department of Psychology & Neuroscience and the Department of Computer Science.

Why Boston College?

Boston College is an elite R1 research institution in the heart of the Boston metropolitan area. Greater Boston is a powerhouse of innovation, home to over 35 colleges and universities – including Harvard and MIT – and a thriving ecosystem of AI & biotech startups. As a PhD student, you will be immersed in this vibrant research community while enjoying the benefits of living in a diverse, bustling metropolis. For the outdoors-inclined, New England offers scenic opportunities to escape city life: from sailing on the Charles River, to hiking or skiing in the White Mountains, to surfing off the shores of Rhode Island, to enjoying freshly caught oysters on Cape Cod.

Position Details

Application

  • Deadline: December 15, 2025
  • Department of Psychology & Neuroscience: [APPLY HERE]
  • Department of Computer Science: [APPLY HERE]

Requirements

The ideal candidate has experience with state-of-the-art RL models/LLMs/VLMs and/or experience analyzing behavioral/neural data. Experience collecting fMRI/MEG data is a plus.

Please do not hesitate to reach out with questions! We also encourage you to forward this to anyone who might be interested.

References

[1] Tomov, M. S., Tsividis, P., Pouncy, T., Tenenbaum, J. B., & Gershman, S. J. (2023). The neural architecture of theory-based reinforcement learning. Neuron, 111(2), 454-469. https://doi.org/10.1016/j.neuron.2023.01.023

December 2025 · Momchil Tomov

New preprint: TreeIRL combines search with ML for safe and human-like autonomous driving in the real world

New preprint from our autonomy research area!

TreeIRL is a novel planner that combines classical search with learning-based methods to achieve state-of-the-art performance in simulation and in real-world autonomous driving! 🚘 🤖 🚀

💡 The key idea is to use Monte Carlo tree search (MCTS) to find a promising set of safe candidate trajectories and inverse reinforcement learning (IRL) to choose the most human-like trajectory among them.
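To make the two-stage idea concrete, here is a toy Python sketch of that selection loop. The function names, trajectory features, and safety check are assumptions for illustration only and do not reflect the actual TreeIRL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcts_candidates(state, n_candidates=8, horizon=20):
    """Stand-in for MCTS: propose candidate trajectories that pass a safety check."""
    candidates = []
    while len(candidates) < n_candidates:
        traj = state + np.cumsum(rng.normal(0.0, 0.5, size=(horizon, 2)), axis=0)
        if np.all(np.abs(np.diff(traj, axis=0)) < 1.5):   # toy kinematic "safety" filter
            candidates.append(traj)
    return candidates

def irl_reward(traj, weights):
    """Linear reward over hand-picked trajectory features, as in many IRL setups."""
    features = np.array([
        -np.mean(np.abs(np.diff(traj, n=2, axis=0))),     # comfort: penalize high jerk
        traj[-1, 0] - traj[0, 0],                         # progress along the route
    ])
    return features @ weights

state = np.zeros(2)
weights = np.array([1.0, 0.2])   # in IRL these would be learned from human driving data
best = max(mcts_candidates(state), key=lambda t: irl_reward(t, weights))
print("selected trajectory endpoint:", best[-1])
```

The sketch preserves the division of labor described above: search supplies safe candidates, while the learned reward (in IRL, recovered from human demonstrations) decides which one is most human-like.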

Why this matters:

🛣️ First real-world evaluation of an MCTS-based planner on public roads.

📊 Comprehensive comparison across simulation and 500+ miles of urban driving in Las Vegas.

🏆 Outperforms classical and SOTA planners, balancing safety, progress, comfort, and human-likeness.

🧩 Flexible framework that can be extended with imitation learning and reinforcement learning.

‼️ Underscores the importance of diverse metrics and real-world evaluation.

→ Read more about it here.

#AI #Robotics #MachineLearning #SelfDrivingCars #AutonomousVehicles #MotionPlanning

September 2025 · Momchil Tomov

Patch foraging paper accepted to Neuron

๐Ÿญ๐Ÿ”โšก๐Ÿง  Our paper studying the neural substrates of patch foraging decisions – whether to stay on a patch or leave it for a more promising one – was accepted to Neuron!

July 2025 · Momchil Tomov

Paper on policy reuse published in PLOS Biology

🧠 Our paper studying how the brain reuses strategies from previous tasks to solve novel tasks was published in PLOS Biology. This follows up on our previous work on multi-task reinforcement learning in humans, where we first reported behavioral evidence for such policy reuse.

June 2025 · Momchil Tomov

Lab2Car paper accepted to ICRA 2025

🚗 Excited to share that our Lab2Car paper was accepted to ICRA 2025! See you in Atlanta!