How do children learn new concepts from just a few examples? Why does an algorithm that has seen millions of minutes of video still not understand the fundamental properties of objects, while we grasp them easily? How can infants, arguably as early as 15 months of age, understand that others have desires and beliefs different from their own, while autonomous agents trained for longer than the whole of human existence fail to do so? I am a research scientist at NYU working with Prof. Brenden Lake and Prof. Moira Dillon at the Human & Machine Learning Lab on understanding human cognitive abilities in an effort to advance current AI algorithms.
M.S. in ECE, 2020, New York University
B.Tech. in Electrical Engineering, 2018, IIT Kanpur
To achieve human-like thinking and performance, machine learning systems need to reason about the autonomous agents in their environment. Inspired by work in developmental psychology, we present a set of challenges that test a machine's ability to make theory-of-mind inferences. As in developmental studies, we use a violation-of-expectation paradigm, in which the machine must predict the plausibility of a video sequence. We also present several baselines to illustrate the difficulty of achieving a strong score on this benchmark.
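As a rough illustration of how a violation-of-expectation evaluation can be scored, here is a minimal sketch. The `surprise` function is a hypothetical stand-in for however a model quantifies how unexpected a video is (e.g., accumulated prediction error), and the trial structure is likewise an assumption for illustration, not the benchmark's actual API.

```python
# Minimal violation-of-expectation (VOE) scoring loop. `surprise` and the
# (expected, unexpected) trial pairing are illustrative assumptions.

from typing import Callable, List, Tuple

def voe_accuracy(
    trials: List[Tuple[object, object]],   # (expected_video, unexpected_video)
    surprise: Callable[[object], float],   # higher = model finds video less plausible
) -> float:
    """Fraction of trials where the model is more surprised by the
    unexpected (implausible) outcome than by the expected one."""
    correct = sum(
        surprise(unexpected) > surprise(expected)
        for expected, unexpected in trials
    )
    return correct / len(trials)
```

A model consistent with infant-like expectations should score well above chance (0.5) under this criterion.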
Children use the mutual exclusivity (ME) bias to learn new words, whereas standard neural networks show the opposite bias, which hinders learning in naturalistic scenarios such as lifelong learning.
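To make the contrast concrete, here is a toy NumPy probe of a standard softmax classifier in a word-to-referent setup; all sizes, labels, and hyperparameters are illustrative assumptions, not the paper's actual experiments.

```python
# Toy probe of the anti-ME bias in a standard softmax classifier.
# Train word->referent mappings for 4 familiar pairs, hold out a 5th
# "novel" word/object slot, then query with the novel word.

import numpy as np

n_words, n_objects = 5, 5
X = np.eye(n_words)[:4]                 # familiar words w0..w3, one-hot
Y = np.eye(n_objects)[:4]               # their referents o0..o3
W = np.zeros((n_words, n_objects))      # input-to-output weights
b = np.zeros(n_objects)                 # output biases

for _ in range(500):                    # plain gradient descent on cross-entropy
    logits = X @ W + b
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    grad = probs - Y
    W -= 0.5 * X.T @ grad
    b -= 0.5 * grad.sum(axis=0)

# Query with the never-trained novel word w4. Its weight row received no
# gradient, so the prediction is driven by the learned output biases.
novel = np.eye(n_words)[4]
logits = novel @ W + b
p = np.exp(logits) / np.exp(logits).sum()
print(p.round(3))   # mass spreads over familiar objects, ~0 on the novel one
```

An ME learner would do the opposite, mapping the novel word onto the novel object; the trained network instead funnels the novel word toward the familiar referents it already knows.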