Robust manipulation is an essential capability for autonomous robots to interact with their environments. From assembly lines to warehouses to people's homes, robots capable of manipulating objects have found a growing range of real-world applications over the past four decades.
Despite substantial progress, robustly manipulating objects with unknown or uncertain properties, e.g., shape, weight, softness, or material, remains challenging. To successfully carry out a given manipulation task, the robot must be able to improve its understanding of these properties in order to plan a robust strategy for solving the task. To do this, robots rely on feedback from various proprioceptive and exteroceptive sensor modalities, such as vision, touch, and haptics.
In this project we investigate a sensor modality that has seen comparatively little adoption in robot manipulation: sound. Audio signals generated during interaction with an object provide a rich source of information that the robot can use to infer the object's properties. For instance, sliding a paper cup across a wooden table has a very different sound profile from tapping the surface of a solid metal cylinder.
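To make the idea concrete, here is a minimal sketch of how a simple spectral feature can separate such sound profiles. It computes the spectral centroid (the magnitude-weighted mean frequency) of a recorded signal via a naive DFT; bright, metallic impact sounds tend to have a higher centroid than dull sliding sounds. The function name and the synthetic test signals are illustrative assumptions, not part of the project's actual pipeline, which would use real microphone data and richer features.

```python
import math

def spectral_centroid(samples, sample_rate):
    """Return the spectral centroid (Hz) of a signal via a naive DFT.

    The centroid is the magnitude-weighted mean frequency. Duller
    interaction sounds (e.g. a paper cup sliding on wood) tend to
    score lower than bright ones (e.g. tapping a metal cylinder).
    """
    n = len(samples)
    weighted = 0.0
    total = 0.0
    for k in range(1, n // 2):  # positive-frequency bins only
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = math.hypot(re, im)
        freq = k * sample_rate / n
        weighted += freq * mag
        total += mag
    return weighted / total if total else 0.0

# Two synthetic stand-ins for interaction sounds: a low hum vs. a bright ring.
rate, n = 8000, 256
low = [math.sin(2 * math.pi * 200 * i / rate) for i in range(n)]
high = [math.sin(2 * math.pi * 2000 * i / rate) for i in range(n)]

assert spectral_centroid(low, rate) < spectral_centroid(high, rate)
```

In practice one would replace the naive DFT with an FFT over short windows and feed a vector of such features (centroid, bandwidth, MFCCs) into a learned classifier or estimator of object properties; this sketch only shows why a single spectral statistic already carries discriminative information.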
The goal of this project is to develop motion planning algorithms under uncertainty for robust manipulation tasks that use audio feedback as a primary source of sensory information. This includes enabling the robot to plan actions that produce informative sound profiles (e.g. by sliding an object across a table), to use the perceived audio signal to infer object properties, and subsequently to plan a strategy that solves the manipulation task at hand.