Virginia Robotics Symposium


November 14th, 2025

Link Lab
Charlottesville, VA

About

The Virginia Robotics Symposium (VRS) brings together robotics researchers from across the state of Virginia. VRS is a one-day event with research talks, poster presentations, and social activities. The goal of VRS is to build connections, disseminate research ideas, and facilitate the creation of large, multi-institutional, and interdisciplinary teams across the Commonwealth.

Registration

Attendance is free. Click here to register!

Poster Presentations

Students are encouraged to present research posters. Click here to sign up for a poster presentation!

Organizing Institutions

Virginia Tech, University of Virginia, and George Mason University

Program

VRS will be held as a one-day event with keynote talks, a panel debate, poster presentations, and social activities. This year's symposium takes place on November 14th from 9:30 AM to 5:00 PM. The abstract for each talk is provided in the Talks section below.

Faculty Speakers

Madhur Behl

University of Virginia

Bringing AI Up To Speed:
Moving Virginia Robotics Autonomously @ 184 mph

Simon Stepputtis

Virginia Tech

Towards Intelligent Collaborative Robots:
Joining Neural Inference with Symbolic Reasoning

Jana Kosecka

George Mason University

Bridging Semantics and Spatial Reasoning:
Opportunities and Challenges of
Large Vision-Language Models in Robotics

Suyi Li

Virginia Tech

Folding Intelligent Machines:
Exploiting Geometry and Mechanics of Origami
to Build Intelligence in the Mechanical Domain

Yen-Ling Kuo

University of Virginia

Teaching Robots the Human Way:
Learning Grounded Representations to
Manipulate Objects and Interact with Humans

Schedule

The full schedule of speakers and topics is listed below.

Time Event Location
9:30 AM Sign In and Morning Coffee Link Lab
10:00 AM Welcoming Remarks Link Lab
10:10 AM Madhur Behl Link Lab
10:35 AM Simon Stepputtis Link Lab
11:00 AM Jana Kosecka Link Lab
11:25 AM Lunch (Boxes Provided)
1:00 PM Poster Session and Lab Tours Rice Davis Commons
3:00 PM Afternoon Coffee Link Lab
3:20 PM Suyi Li Link Lab
3:45 PM Yen-Ling Kuo Link Lab
4:10 PM Panel Debate Link Lab
4:45 PM Awards and Closing Remarks Link Lab

Venue

The 2025 symposium will be held at the Link Lab, Olsson Hall, University of Virginia, Charlottesville, VA 22903.

Contact

For more information, please contact Dylan Losey (losey@vt.edu), Nicola Bezzo (nbezzo@virginia.edu), or Xuesu Xiao (xiao@gmu.edu).

Dylan Losey

Virginia Tech

Nicola Bezzo

University of Virginia

Xuesu Xiao

George Mason University

Talks

Madhur Behl

Bringing AI Up To Speed: Moving Virginia Robotics Autonomously @ 184 mph

Despite decades of advancement, autonomous driving systems have not met the high expectations set by many. What’s missing is physical intelligence: the ability of AI systems to reason, react, and adapt in real time, while operating safely and effectively within the laws of physics. In this talk, I will first examine which hurdles have turned out to be more formidable than expected, and share our research on how to refine testing methodologies to advance the safety of autonomous vehicles. I will then show how high-speed autonomous racing provides a unique proving ground to test the boundaries of AI’s physical capabilities. Leveraging more than a decade of experience in high-speed autonomous racing, particularly with the full-scale Cavalier Autonomous Racing Indy car and the F1tenth platform, I will demonstrate how racing at high speeds and in close proximity to other vehicles exposes unsolved challenges in perception, planning, and control. I will recount our journey from the lab to lap times, and the rigorous engineering required to build a full-scale autonomous racecar from scratch. Despite progress, autonomous racing has yet to match the skill of expert human drivers or master the complexity of dense, multi-car competition, indicating that we still have several more laps to go on our path toward artificial general “driving” intelligence.


Simon Stepputtis

Towards Intelligent Collaborative Robots: Joining Neural Inference with Symbolic Reasoning

Deep learning has driven remarkable progress in robotics, particularly through the use of large foundation models. Despite their success, these models demand extensive computational resources and struggle with the adaptable reasoning needed for intelligent robots to operate in ever-changing human-centric environments. In this talk, I will discuss how neuro-symbolic methods address these challenges by integrating the expressive power of neural networks with the reasoning capabilities of symbolic approaches to advance robot intelligence. This hybrid approach allows for small yet effective models that leverage symbolic reasoning as an integral part of anticipating human actions, as well as providing a structured approach for zero-shot interaction in new situations and with human partners. Finally, I will conclude by outlining how these methods pave the way for intelligent robots in complex, real-world applications while addressing key challenges in lifelong learning and efficiency.


Jana Kosecka

Bridging Semantics and Spatial Reasoning: Opportunities and Challenges of Large Vision-Language Models in Robotics

Enabling robots to understand, reason, and act in their surrounding environment requires reliable and expressive representations of that environment, at different abstraction levels for different tasks. Recent advances in large vision-language models (LVLMs) and vision-language-action (VLA) models have demonstrated impressive generalization capabilities across diverse manipulation and navigation tasks specified in natural language, owing to their capacity to encode rich semantic knowledge and commonsense priors. Despite these advancements, LVLMs exhibit limited spatial awareness and insufficiently precise action grounding in physical environments. I will discuss the opportunities and challenges associated with using LVLMs, and show how their improved capabilities can be applied to object search and instruction following for long-range navigation tasks.


Suyi Li

Folding Intelligent Machines: Exploiting Geometry and Mechanics of Origami to Build Intelligence in the Mechanical Domain

Over the past four centuries, origami—the ancient art of paper folding—has evolved from a simple recreational craft into a powerful engineering framework for creating functional materials and robotic systems. This talk will highlight our recent efforts to build origami-inspired robots that can crawl and manipulate like animals, or grow and adapt like trees. In particular, we will explore how the mechanics of origami can be harnessed to embed intelligent behaviors directly into the robot’s physical body. Examples include using multi-stability to sequence earthworm-like peristaltic locomotion without digital controllers, leveraging physical wobbling dynamics to classify objects without cameras, and exploiting collective folding behaviors to grasp irregularly shaped objects. These examples illustrate how geometry and mechanics can open new avenues for computation and robotic control.


Yen-Ling Kuo

Teaching Robots the Human Way: Learning Grounded Representations to Manipulate Objects and Interact with Humans

For robots to robustly and flexibly interact with humans, they need to learn to plan and execute actions across diverse scenarios. However, in many scenarios, planning actions that seem effortless for humans remains very difficult for robots. Inspired by how humans plan actions, in this talk I will explore how to build latent representations grounded in behaviors and to associate multimodal semantic representations to support efficient and robust planning and reasoning. I will use my work in robotic manipulation, human-AI collaboration, and theory of mind reasoning to demonstrate these ideas.