
Beyond Rigid Worlds: Representing and Interacting with Non-Rigid Objects Workshop

Seoul, Korea | September 27, 2025

Overview

The physical world is inherently non-rigid and dynamic. However, many modern robotic modeling and perception stacks assume rigid, static environments, limiting their robustness and generality in the real world. Non-rigid objects such as ropes, cloth, plants, and soft containers are common in daily life, and many environments, including sand, fluids, flexible structures, and dynamic scenes, exhibit deformability and history dependence that challenge traditional assumptions in robotics.

This workshop comes at a pivotal moment: advances in foundation models, scalable data collection, differentiable physics, and 3D modeling and reconstruction create new opportunities to represent and interact with non-rigid, dynamic worlds. At the same time, real-world applications increasingly demand systems that can handle soft, articulated, or granular dynamic objects. The workshop will convene researchers from robotics, computer vision, and machine learning to tackle shared challenges in perception, representation, and interaction in non-rigid worlds. By surfacing emerging solutions and promoting cross-disciplinary collaboration, the workshop aims to advance the development of more generalizable models grounded in data and physics for real-world robotic interaction.

Discussion Topics

  • How might we learn to robustly perceive, reconstruct, and represent non-rigid objects in 3D, particularly from sparse or noisy sensor data?
  • How might simulation tools, foundation models, and scene-specific reconstruction methods (e.g., 3D Gaussian splatting) be used to represent non-rigid, dynamic worlds?
  • How might we actively, interactively, or adaptively perceive the world to reveal highly uncertain, history-dependent object or environment properties?
  • How might we achieve reliable robot manipulation and interaction in complex, real-world scenarios involving non-rigid objects with varying topology, material properties, and appearance?
  • How might we design representation and perception strategies to handle complex object appearance or material properties such as translucency (e.g., glass), high reflectance (e.g., metal), or particulate behavior (e.g., sand)?

Call For Papers

We invite extended abstracts of up to 5 pages (excluding references, acknowledgments, limitations, and appendix) formatted in the CoRL template and submitted via the RINO OpenReview console.

Best Paper and Best Poster Awards: The workshop will recognize outstanding contributions with Best Paper and Best Poster awards. Award amounts will be announced at a later date.

Submissions will be reviewed in a double-blind process by workshop organizers and attendees. Following the CoRL main track's reciprocal reviewing model, each submission must nominate one reviewer, who will evaluate one other contribution. Accepted abstracts will be published on the OpenReview page (non-archival, no DOI) and presented as posters. Selected works will be invited for spotlight presentations.

We encourage submissions of in-progress work and extensions of previously published material; novelty is welcome but not required. However, workshop versions of papers accepted to the CoRL 2025 main track are not permitted. We strongly encourage at least one author of each accepted paper to attend the workshop in person. All deadlines are in the Anywhere on Earth (AoE) timezone.

Paper Submission Opens Aug 1
Paper Submission Deadline Aug 24 (extended from Aug 21)
Review Period Aug 25 - Sep 5
Author Notification Sep 8
Camera-ready Deadline Sep 17




Invited Speakers

Jeannette Bohg
Stanford University

Short Bio: Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at the Autonomous Motion Department (AMD) of the MPI for Intelligent Systems until September 2017. Before joining AMD in January 2012, Professor Bohg earned her Ph.D. at the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm. In her thesis, she proposed novel methods for multi-modal scene understanding for robotic grasping. She also studied at Chalmers in Gothenburg and at the Technical University of Dresden, where she received her Master's in Art and Technology and her Diploma in Computer Science, respectively. Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time, and multi-modal, such that they can provide meaningful feedback for execution and learning. Professor Bohg has received several Early Career and Best Paper awards, most notably the 2019 IEEE Robotics and Automation Society Early Career Award and the 2020 Robotics: Science and Systems Early Career Award.
Talk Title: Fine Sensorimotor Skills for Using Tools, Operating Devices, Assembling Parts, and Manipulating Non-Rigid Objects


Yunzhu Li
Columbia University

Short Bio: Yunzhu Li is an Assistant Professor of Computer Science at Columbia University, where he leads the Robotic Perception, Interaction, and Learning Lab (RoboPIL). Prior to joining Columbia, he was an Assistant Professor in the Department of Computer Science at the University of Illinois Urbana-Champaign. He completed a postdoctoral fellowship at the Stanford Vision and Learning Lab, working with Fei-Fei Li and Jiajun Wu. He earned his Ph.D. from the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, advised by Antonio Torralba and Russ Tedrake, and his bachelor’s degree from Peking University in Beijing. Professor Li's work has been recognized with best paper awards at ICRA and CoRL and with research and innovation awards from Amazon and Sony.
Talk Title: Simulating and Manipulating Deformable Objects with Structured World Models


Siyuan Huang
Beijing Institute for General Artificial Intelligence (BIGAI)

Short Bio: Siyuan Huang is a Research Scientist at the Beijing Institute for General Artificial Intelligence (BIGAI), where he directs the Embodied Robotics Center and the BIGAI-Unitree Joint Lab of Embodied AI and Humanoid Robot. He received his Ph.D. from the Department of Statistics at the University of California, Los Angeles (UCLA). His research aims to build a general agent capable of understanding and interacting with 3D environments like humans. To achieve this, his work has contributed to (i) developing scalable and hierarchical representations for 3D reconstruction and semantic grounding, (ii) modeling and imitating human interactions with the 3D world, and (iii) building robots proficient at interacting with the 3D world and with humans.
Talk Title: Learning to Build Interactable Replica of Articulated World

Event Schedule (tentative)

9:30 - 9:35 Introduction and Opening Remarks
9:35 - 10:00 Speaker 1: Jeannette Bohg
10:00 - 10:30 Spotlight Session 1 & Poster Overview
10:30 - 11:00 Coffee Break & Poster Sessions
11:00 - 11:25 Speaker 2: Yunzhu Li
11:25 - 11:35 Spotlight Session 2
11:35 - 12:00 Speaker 3: Siyuan Huang
12:00 - 12:30 Panel Discussion




Organizers


Holly Dinkel
University of Illinois Urbana-Champaign
Marcel Büsching
KTH Royal Institute of Technology
Jad Abou-Chakra
Queensland University of Technology
Alessio Caporali
University of Bologna
Bardienus Pieter Duisterhof
Carnegie Mellon University
Alberta Longhini
KTH Royal Institute of Technology
Jens Lundell
University of Turku
Priya Sundaresan
Stanford University
Mingrui Yu
Tsinghua University
Kaifeng Zhang
Columbia University

Sponsors

Contact: For questions, please contact Marcel Büsching (busching@kth.se).
Website template from LEAP Workshop.