Natasha Jaques
Social Learning
Basis for Intentions: Efficient Inverse Reinforcement Learning using Past Experience
Using inverse reinforcement learning to infer human preferences is challenging because it is an underspecified problem. We use multi-task RL pre-training and successor features to learn a strong prior over the space of reasonable goals in an environment, which we call a basis, and this prior enables rapidly inferring an expert's reward function from only 100 samples (a minimal sketch of the inference step follows below).
M. Abdulhai, Natasha Jaques, S. Levine. Preprint, 2022.
PDF · Cite · Code · Project
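For intuition about the inference step: given pretrained successor features psi(s, a) and a linear reward r = phi(s, a) . w, the expert's weight vector w can be fit to a handful of demonstrated actions by treating Q(s, a) = psi(s, a) . w as action preferences. The toy numpy sketch below illustrates this idea; the sizes, the greedy expert, and the softmax-likelihood objective are illustrative assumptions, not the paper's code.

    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions, d = 50, 4, 8                  # toy sizes (assumed)
    psi = rng.normal(size=(n_states, n_actions, d))    # pretrained successor features psi(s, a)
    w_true = rng.normal(size=d)                        # unknown expert reward weights

    # ~100 expert samples: the expert acts greedily w.r.t. psi(s, a) . w_true
    states = rng.integers(n_states, size=100)
    actions = np.argmax(psi[states] @ w_true, axis=1)

    # Fit w by maximizing the softmax likelihood of the expert's actions,
    # treating Q(s, a) = psi(s, a) . w as the expert's action preferences.
    w, lr = np.zeros(d), 0.5
    for _ in range(200):
        q = psi[states] @ w                                 # (100, n_actions)
        p = np.exp(q - q.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        grad = (psi[states, actions] - np.einsum('na,nad->nd', p, psi[states])).mean(axis=0)
        w += lr * grad

    print("cosine(w, w_true) =", w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true)))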
PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning
PsiPhi-Learning learns successor representations for the policies of other agents and for the ego agent, using a shared underlying state representation. Learning from other agents helps the ego agent take better actions at inference time, and learning from its own RL experience improves its modeling of other agents (see the sketch after this entry).
A. Filos, C. Lyle, Y. Gal, S. Levine, Natasha Jaques*, G. Farquhar*. International Conference on Machine Learning (ICML), 2021. Oral (top 3% of submissions).
PDF · Cite · Project · ICML talk
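A rough picture of the shared structure described above, in PyTorch: a shared cumulant network phi(s, a), per-agent successor-feature heads psi_k, and per-agent preference vectors w_k, trained on other agents' reward-free transitions with a successor-feature TD term plus a behavioral term that models each agent's policy as a softmax over psi_k . w_k. This is a hedged sketch of inverse temporal difference learning under those assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PsiPhiSketch(nn.Module):
        def __init__(self, obs_dim, n_actions, n_agents, d=16, gamma=0.99):
            super().__init__()
            self.gamma, self.n_actions, self.d = gamma, n_actions, d
            self.phi = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, n_actions * d))   # shared cumulants
            self.psi = nn.ModuleList(
                nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                              nn.Linear(64, n_actions * d))          # per-agent successor features
                for _ in range(n_agents))
            self.w = nn.Parameter(torch.zeros(n_agents, d))           # per-agent preferences

        def itd_loss(self, k, s, a, s_next, a_next):
            """Loss on agent k's demonstration transitions (no reward labels)."""
            B = s.shape[0]
            idx = torch.arange(B)
            phi_sa = self.phi(s).view(B, self.n_actions, self.d)[idx, a]
            psi_all = self.psi[k](s).view(B, self.n_actions, self.d)
            psi_sa = psi_all[idx, a]
            with torch.no_grad():
                psi_next = self.psi[k](s_next).view(B, self.n_actions, self.d)[idx, a_next]
            td = F.mse_loss(psi_sa, phi_sa + self.gamma * psi_next)   # successor-feature TD
            logits = psi_all @ self.w[k]                               # model agent k's policy
            bc = F.cross_entropy(logits, a)                            # as softmax over psi . w_k
            return td + bc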
Emergent Social Learning via Multi-agent Reinforcement Learning
Model-free RL agents fail to learn from experts present in multi-agent environments. By adding a model-based auxiliary loss, we induce social learning, which allows agents to learn how to learn from experts. When deployed to novel environments with new experts, they use social learning to determine how to solve the task, and they generalize better than agents trained alone with RL or with imitation learning (a simplified sketch of the auxiliary loss follows below).
Kamal Ndousse, Douglas Eck, Sergey Levine, Natasha Jaques. International Conference on Machine Learning (ICML); NeurIPS Cooperative AI Workshop, 2021. Best Paper.
PDF · Cite · Code · Poster · Slides · Cooperative AI talk · ICML talk
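As a concrete (and heavily simplified) illustration of the auxiliary loss mentioned above: the agent keeps its ordinary model-free policy and value heads, but adds a forward-model head that must predict the next observation, which includes the behavior of any experts in view. The PyTorch sketch below is illustrative; the architecture and the way the auxiliary term is combined with the RL loss are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SocialLearningAgent(nn.Module):
        def __init__(self, obs_dim, n_actions, hidden=64):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
            self.policy = nn.Linear(hidden, n_actions)                    # model-free RL heads
            self.value = nn.Linear(hidden, 1)
            self.forward_model = nn.Linear(hidden + n_actions, obs_dim)   # model-based auxiliary head

        def aux_loss(self, obs, action, next_obs):
            """Predict the next observation, forcing the encoder to represent
            what nearby (expert) agents are doing."""
            h = self.encoder(obs)
            a = F.one_hot(action, self.policy.out_features).float()
            pred = self.forward_model(torch.cat([h, a], dim=-1))
            return F.mse_loss(pred, next_obs)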
Joint Attention for Multi-Agent Coordination and Social Learning
Joint attention is a critical component of human social cognition. In this paper, we ask whether a mechanism based on shared visual attention can be useful for improving multi-agent coordination and social learning (an illustrative sketch follows below).
D. Lee, Natasha Jaques, J. Kew, D. Eck, D. Schuurmans, A. Faust. ICRA Social Intelligence Workshop, 2021. Spotlight talk.
PDF · Cite · Code · Poster · Video
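One way to picture such a mechanism (an illustrative sketch under my own assumptions, not the paper's implementation): give each agent an intrinsic bonus when its visual attention map agrees with the group's, for example by penalizing each agent's divergence from the mean attention map.

    import torch
    import torch.nn.functional as F

    def joint_attention_bonus(attn_logits, scale=0.1):
        """attn_logits: (n_agents, n_locations) attention logits, one row per agent."""
        attn = F.softmax(attn_logits, dim=-1)
        mean_attn = attn.mean(dim=0, keepdim=True)
        # KL of each agent's attention from the group's average attention.
        kl = (attn * (attn.log() - mean_attn.log())).sum(dim=-1)   # (n_agents,)
        return -scale * kl                                          # higher bonus when attention is shared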
Human-Centric Dialog Training via Offline Reinforcement Learning
We train dialog models on interactive data from conversations with real humans, using a novel offline RL technique based on KL-control. Rather than relying on manual ratings, we learn from implicit signals like sentiment, and we show that this results in better performance (a sketch of the KL-control idea follows below).
Natasha Jaques*, J. H. Shen*, A. Ghandeharioun, C. Ferguson, A. Lapedriza, N. Jones, S. Gu, R. Picard. Empirical Methods in Natural Language Processing (EMNLP), 2020.
PDF · Cite · Code · Dataset · Slides · EMNLP Talk
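For intuition about the KL-control idea mentioned above: the learned policy is rewarded for the implicit human signal but penalized for drifting away from a pretrained prior language model. One common way to realize this is a soft, prior-weighted backup in the Q-learning target, sketched below in PyTorch; the function and variable names are illustrative assumptions, not the paper's code.

    import torch
    import torch.nn.functional as F

    def kl_control_target(q_next, prior_logits_next, reward, done, gamma=0.99, alpha=0.1):
        """One-step target for KL-regularized Q-learning over next tokens.

        q_next:            (B, V) target-network Q-values at the next token step
        prior_logits_next: (B, V) logits of the frozen prior language model
        reward:            (B,)   implicit human feedback (e.g. sentiment) for this step
        done:              (B,)   1.0 where the dialog turn ends
        """
        # Soft (log-sum-exp) backup weighted by the prior, so the policy stays
        # close to the prior while still preferring high-reward continuations.
        log_prior = F.log_softmax(prior_logits_next, dim=-1)
        v_next = alpha * torch.logsumexp(log_prior + q_next / alpha, dim=-1)
        return reward + gamma * (1.0 - done) * v_next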
Social and Affective Machine Learning
My PhD Thesis spans both Social Reinforcement Learning and Affective Computing, investigating how affective and social intelligence can enhance machine learning algorithms, and how machine learning can enhance our ability to predict and understand human affective and social phenomena.
Natasha Jaques. PhD Thesis, Massachusetts Institute of Technology, 2019.
PDF · Cite · Thesis Defense · CV News write-up
Learning via Social Awareness: Improving a Deep Generative Sketching Model with Facial Feedback
We show the outputs of a generative model of sketches to human observers and record their facial expressions. Using only a small number of facial expression samples, we are able to tune the model to produce drawings that are significantly better rated by humans.
Natasha Jaques, J. McCleary, J. Engel, D. Ha, F. Bertsch, D. Eck, R. Picard. International Conference on Learning Representations (ICLR) workshop, 2018.
PDF · Cite · Slides · Quartz article