How do children learn new concepts from just a few examples? Why does an algorithm that has seen millions of minutes of video fail to understand the fundamental properties of objects, while we grasp them easily? How can infants from as early as 15 months of age understand that others have desires and beliefs different from their own, while autonomous agents trained on more experience than spans all of human existence fail to do so? I am currently working with Prof. Brenden Lake at the Human & Machine Learning Lab on understanding human cognitive abilities in an effort to advance current AI algorithms.
“You know nothing of future time and yet in my teeming circuitry I can navigate infinite delta streams of future probability and see that there must one day come a computer whose merest operational parameters I am not worthy to calculate, but which it will be my eventual fate to design.”
M.S. in ECE, 2020 (Expected)
New York University
B.Tech. in Electrical Engineering, 2018
Children use the mutual exclusivity (ME) bias to learn new words, while standard neural networks show the opposite bias, which hinders learning in naturalistic scenarios such as lifelong learning.