Hello. I am currently a Robot Research Engineer at the Air Force Research Laboratory (AFRL) via AV. At the same time, I am continuing my research with Professors Stefanie Tellex and George Konidaris at Brown University. My research primarily focuses on embodied multimodal AI, robotics, and planning. I previously worked with Professors Hao Zhang and Hong Yu at the University of Massachusetts Amherst on robotics and NLP, respectively.
I earned my M.S. in Computer Science, concentrating in machine learning, from the University of Massachusetts Amherst in 2023. Before that, I graduated with a B.S. in Computer Science from the same institution in 2022.
During my Bachelor's, I focused on gaining industry experience through internships as a mobile developer and a data engineer. During my Master's, I worked as a Data Science intern on time-series data, honing my skills in predictive analytics and machine learning.
TL;DR: My research develops hybrid neuro-symbolic methods for general-purpose mobile manipulation in robotic assistants. This approach addresses the brittle generalization and data inefficiency of pure learning methods (e.g., behavior cloning) by integrating their strengths with the robustness of classical symbolic planning, with a focus on data-efficient, language-conditioned solutions for long-horizon tasks.
------
The growing demand for personalized assistance for an aging and increasingly busy population necessitates the development of general-purpose robotic assistants capable of executing a wide variety of tasks in both home and workplace environments. Think Rosey from The Jetsons. Achieving this goal requires robots that can reliably understand and execute complex, open-vocabulary mobile manipulation tasks based on natural language instructions. This is the ultimate objective of my research.
My research philosophy is pragmatic: I use the most effective methodologies available rather than confining myself to a single paradigm.
Critique of Pure Learning: Recently popular data-driven methods, such as Imitation Learning (IL) and particularly Behavior Cloning (BC), work well in specific domains like tabletop and contact-rich manipulation. However, they exhibit significant limitations for general-purpose assistance, including brittle generalization, extreme data requirements, and limited success in mobile manipulation. Similarly, pure Reinforcement Learning (RL) requires extensive sample collection, often relying on simulators that introduce a challenging sim-to-real transfer problem.
Revisiting Classical Methods: Conversely, classical approaches like symbolic planning offer robustness and guaranteed task completion, provided that accurate environmental and task models are supplied. While historically criticized for a lack of environmental flexibility and the need for expert human modeling, their inherent reliability is a compelling advantage when compared to the generalization failures of current learning-based techniques.
Given the complementary strengths and weaknesses of these approaches, my research focuses on developing hybrid methodologies, specifically modular neuro-symbolic systems, that combine the best elements of the classical and learning-based paradigms to achieve the required robustness and generalizability.
One of my core interests lies in developing data-efficient methods that generalize effectively to long-horizon mobile manipulation tasks.
My work presented at IROS 2025 demonstrated that state-of-the-art BC mobile manipulation methods fail to generalize effectively to novel environments and previously unseen long-horizon tasks using limited training data. Crucially, a hybrid neuro-symbolic method, developed without requiring any real-world robot data, significantly outperformed the BC baselines.
This finding has motivated a shift in my research focus toward modular neuro-symbolic techniques that integrate the strengths of both planning and learning. Current projects explore various strategies for combining these elements. I am also deeply interested in improving fundamental components of the "neuro" side itself, such as enhancing the data efficiency of Vision-Language-Action (VLA) models and developing better methods for long-term memory and contextual reasoning in language-conditioned robotic systems.
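To make the "modular neuro-symbolic" idea concrete, here is a minimal, self-contained toy sketch (not LAMBDA or any system mentioned above; every function, operator, and skill name here is hypothetical). A stubbed "neuro" grounder maps a language instruction to a symbolic goal, a small STRIPS-style breadth-first planner sequences abstract operators, and each planned operator dispatches to a placeholder learned skill.

```python
"""Illustrative sketch only: a toy modular neuro-symbolic pipeline."""
from dataclasses import dataclass
from typing import Callable

# --- "Neuro" stage (stub): map open-vocabulary language to a symbolic goal. ---
# In a real system this would be an LLM/VLM grounder; here it is a hard-coded lookup.
def ground_instruction(instruction: str) -> frozenset:
    if "mug" in instruction and "kitchen" in instruction:
        return frozenset({("at", "mug", "kitchen")})
    raise ValueError("Instruction not understood by this toy grounder.")

# --- Symbolic stage: STRIPS-like operators plus breadth-first plan search. ---
@dataclass(frozen=True)
class Operator:
    name: str
    preconds: frozenset
    add: frozenset
    delete: frozenset

def plan(state: frozenset, goal: frozenset, ops: list[Operator]) -> list[Operator]:
    frontier = [(state, [])]
    seen = {state}
    while frontier:
        s, path = frontier.pop(0)
        if goal <= s:
            return path
        for op in ops:
            if op.preconds <= s:
                s2 = (s - op.delete) | op.add
                if s2 not in seen:
                    seen.add(s2)
                    frontier.append((s2, path + [op]))
    raise RuntimeError("No plan found.")

# --- Learned skills (stubs): each symbolic operator maps to a low-level policy. ---
SKILLS: dict[str, Callable[[], None]] = {
    "pick(mug)": lambda: print("executing learned pick policy"),
    "navigate(kitchen)": lambda: print("executing learned navigation policy"),
    "place(mug,kitchen)": lambda: print("executing learned place policy"),
}

if __name__ == "__main__":
    init = frozenset({("at", "robot", "office"), ("at", "mug", "office"), ("handempty",)})
    ops = [
        Operator("pick(mug)",
                 frozenset({("at", "robot", "office"), ("at", "mug", "office"), ("handempty",)}),
                 frozenset({("holding", "mug")}),
                 frozenset({("at", "mug", "office"), ("handempty",)})),
        Operator("navigate(kitchen)",
                 frozenset({("at", "robot", "office")}),
                 frozenset({("at", "robot", "kitchen")}),
                 frozenset({("at", "robot", "office")})),
        Operator("place(mug,kitchen)",
                 frozenset({("at", "robot", "kitchen"), ("holding", "mug")}),
                 frozenset({("at", "mug", "kitchen"), ("handempty",)}),
                 frozenset({("holding", "mug")})),
    ]
    goal = ground_instruction("bring the mug to the kitchen")
    for op in plan(init, goal, ops):
        SKILLS[op.name]()  # dispatch each symbolic step to a learned skill
```

The point of the sketch is only the division of labor: open-vocabulary language handling lives in the learned grounder, long-horizon sequencing lives in the symbolic planner, and low-level control lives in learned skills that can be swapped or retrained independently.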
News
[11/2025] Reviewed a paper for ICRA 2026
[10/2025] Presented LAMBDA at IROS 2025
[09/2025] Started as a Robot Research Engineer at the Air Force Research Laboratory (AFRL) via AV
[06/2025] Presented my work at the Air Force Research Laboratory (AFRL)
[06/2025] LAMBDA accepted to IROS 2025
[06/2025] Reviewed three papers for CoRL 2025
[11/2024] Presented LAMBDA at two CoRL 2024 workshops
[10/2024] Reviewed six papers for three CoRL 2024 workshops
[09/2024] Presented CoMMa at NERC 2024
[07/2024] Presented LAMBDA at two RSS 2024 workshops
[06/2024] Reviewed two papers for an RSS 2024 workshop
[06/2024] LAMBDA paper accepted to two RSS 2024 workshops
[11/2023] Robotics paper accepted to CoRL 2023 Workshop
[11/2023] Presented LAMBDA at NERC 2023
[07/2023] Began working as a Robotics Research Assistant at Brown University
[05/2023] Graduated with a Master of Science in Computer Science from the University of Massachusetts Amherst
[09/2022] Concluded UKG internship
[06/2022] Joined UKG as a Data Science intern
[02/2022] Continued at the University of Massachusetts Amherst as a Computer Science Master's student concentrating in ML
[01/2022] Graduated with a Bachelor of Science in Computer Science from the University of Massachusetts Amherst
[08/2021] Concluded The Hartford internship
[05/2021] Joined The Hartford as a Data Engineer intern
[03/2021] Concluded iSmile Technologies internship
[11/2020] Joined iSmile Technologies as a Mobile Developer & Team Lead intern
[09/2018] Joined the University of Massachusetts Amherst as a Computer Science undergraduate student