The physical world is inherently non-rigid and dynamic. However, many modern robotic modeling and perception stacks assume rigid, static environments, limiting their robustness and generality in the real world. Non-rigid objects such as ropes, cloth, plants, and soft containers are common in daily life, and many environments, including sand, fluids, flexible structures, and dynamic scenes, exhibit deformability and history dependence that challenge traditional assumptions in robotics.
This workshop comes at a pivotal moment: advances in foundation models, scalable data collection, differentiable physics, and 3D modeling and reconstruction create new opportunities to represent, and interact with, non-rigid, dynamic worlds. At the same time, real-world applications increasingly demand systems that can handle soft, articulated, or granular dynamic objects. The workshop will convene researchers from robotics, computer vision, and machine learning to tackle shared challenges in perception, representation, and interaction in non-rigid worlds. By surfacing emerging solutions and promoting cross-disciplinary collaboration, the workshop aims to advance the development of more generalizable models, grounded in data and physics, for real-world robotic interaction.
| Time | Session |
| --- | --- |
| 9:30 - 9:35 | Introduction and Opening Remarks |
| 9:35 - 10:00 | Speaker 1 |
| 10:00 - 10:30 | Spotlight Session 1 & Poster Overview |
| 10:30 - 11:00 | Coffee Break & Poster Sessions |
| 11:00 - 11:25 | Speaker 2 |
| 11:25 - 11:35 | Spotlight Session 2 |
| 11:35 - 12:00 | Speaker 3 |
| 12:00 - 12:30 | Panel Discussion & Debate |