Machine Learning Research Scientist \\ Adjunct Professor \\ Brain-Computer Interface Researcher
Leading research in applying LLMs to command and control planning, with a focus on human-guided AI systems that can be steered and aligned through human feedback.
Developing algorithms that learn from human feedback, demonstrations, and interventions to create more intuitive and controllable AI systems for complex decision-making tasks.
Exploring deep reinforcement learning techniques for multi-agent coordination, human-robot collaboration, and autonomous systems in tactical environments.
Generative pre-trained transformers for accelerated Course of Action development in military operations. A groundbreaking application of LLMs to Command and Control planning.
Large language models for accelerated plan of action development in disaster response scenarios. Applying AI to critical emergency response planning.
A dataset for prototyping spatial reasoning methods for multi-agent environments. Enables research in AI-driven strategic decision-making.
A novel approach to learning from human demonstrations in Minecraft environments. Bridges the gap between human intuition and AI learning.
A novel framework for incorporating human ratings into reinforcement learning algorithms. Published at AAAI 2024.
Development of practical visual evoked potential (VEP)-based brain-computer interfaces. My PhD work focused on creating BCIs suitable for real-world applications.