
Kanishk Gandhi
I am interested in building machines that understand people. I explore topics in reasoning, discovery, and interaction.
CS PhD Student, Stanford
Advisor: Noah Goodman
Additional Advisors: Dorsa Sadigh, Tobi Gerstenberg
Previous Affiliations: Brenden Lake, NYU; Moira Dillon, NYU; PathAI; IIT Kanpur
Publications
2025
- Scaling up the think-aloud method - Oral at Proceedings of the Annual Meeting of the Cognitive Science Society 47, 2025
  arXiv
- Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four Habits of Highly Effective STaRs - Second Conference on Language Modeling, 2025
  arXiv | Media Coverage: CNBC
- Big-Math: A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models - Preprint, 2025
  arXiv
- D3: A Large Dataset for Training Code Language Models to Act Diff-by-Diff - CoLM, 2025
  OpenReview
- Non-literal Understanding of Number Words by Language Models - Proceedings of the Annual Meeting of the Cognitive Science Society 47, 2025
  arXiv
- Towards System 2 Reasoning in LLMs: Learning How to Think With Meta Chain-of-Thought - Preprint, 2025
  arXiv
- BoxingGym: Benchmarking Progress in Automated Experimental Design and Model Discovery - Preprint, 2025
  arXiv
2024
- Surveying the Effects of Quality, Diversity, and Complexity in Synthetic Data from Large Language Models - Preprint, 2024
  arXiv
- Stream of Search (SoS): Learning to Search in Language - First Conference on Language Modeling, 2024
  arXiv
- Human-like Affective Cognition in Foundation Models - Preprint, 2024
  arXiv
- Psychometric Alignment: Capturing Human Knowledge Distributions via Language Models - Preprint, 2024
  arXiv
- Self-Supervised Alignment with Mutual Information: Learning to Follow Principles without Preference Labels - NeurIPS, 2024
  arXiv
- Procedural Dilemma Generation for Evaluating Moral Reasoning in Humans and Language Models - Oral at CogSci, 2024 (NeurIPS Workshop on Moral Philosophy and Moral Psychology, 2023)
  arXiv | NeurIPS Workshop
2023
- Social Contract AI: Aligning AI Assistants with Implicit Group Norms - Oral at NeurIPS SoLaR Workshop, 2023
  arXiv
- Understanding social reasoning in language models with language models - Spotlight at NeurIPS, 2023
  arXiv | Code
- Certified Deductive Reasoning with Language Models - TMLR, 2023; presented at ICLR 2025
  arXiv | Code
- Strategic Reasoning with Language Models - NeurIPS Workshop on Foundation Models and Decision Making, 2023
  arXiv
- Commonsense Psychology in Human Infants and Machines - Cognition, 2023
  Media Coverage: NSF Science Now | WNYC Radio/Gothamist | The Daily Beast | The Jerusalem Post | NYU News | Washington Square News | Science Daily
- Intuitions about physical scenes and objects in Virtual Reality (VR) - Proceedings of the Annual Meeting of the Cognitive Science Society, 2022
2021
- Baby Intuitions Benchmark (BIB): Discerning the goals, preferences, and actions of others - NeurIPS, 2021
- Evaluating infants’ reasoning about agents using the Baby Intuitions Benchmark (BIB) - Proceedings of the Annual Meeting of the Cognitive Science Society
2020
- Mutual exclusivity as a challenge for deep neural networks - NeurIPS, 2020
  Media Coverage: New Scientist