Humans in the loop help robots find their way

Computer scientists’ interactive program aids motion planning for environments with obstacles

Computer scientists develop a method that allows humans to help complex robots efficiently “see” their environments and carry out tasks.

Just like us, robots can’t see through walls. Sometimes they need a little help to get where they’re going. 

Engineers at Rice University have developed a method that allows humans to help robots “see” their environments and carry out tasks. 

The strategy, called Bayesian Learning IN the Dark -- BLIND, for short -- is a novel solution to the long-standing problem of motion planning for robots that work in environments where not everything is clearly visible all the time. 

The peer-reviewed study led by computer scientists Lydia Kavraki and Vaibhav Unhelkar and co-lead authors Carlos Quintero-Peña and Constantinos Chamzas of Rice’s George R. Brown School of Engineering was presented at the Institute of Electrical and Electronics Engineers’ International Conference on Robotics and Automation in late May.

The algorithm developed primarily by Quintero-Peña and Chamzas, both graduate students working with Kavraki, keeps a human in the loop to “augment robot perception and, importantly, prevent the execution of unsafe motion,” according to the study. 

The task set for this Fetch robot by Rice University computer scientists is made easier by their BLIND software, which allows for human intervention when the robot’s path is blocked by an obstacle. Keeping a human in the loop augments robot perception and prevents the execution of unsafe motion, according to the researchers. (Credit: Kavraki Lab/Rice University)

To do so, they combined Bayesian inverse reinforcement learning (by which a system learns from continually updated information and experience) with established motion planning techniques to assist robots that have “high degrees of freedom” -- that is, a lot of moving parts.
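The paper gives the full Bayesian inverse reinforcement learning formulation; as a rough intuition only, the toy Python sketch below shows how binary human feedback can update a robot's belief about a part of the environment it cannot observe. All probability values here are made-up illustration numbers, not from the study.

```python
# Toy illustration (not the paper's formulation) of the Bayesian idea
# behind BLIND: binary human critiques update the robot's belief about
# a region it cannot see.
def update(p_blocked, rejected, p_reject_if_blocked=0.9, p_reject_if_free=0.2):
    """One Bayes update of P(region is blocked) from a binary critique.

    A human is assumed more likely to reject a path segment that passes
    through the region if the region really is blocked."""
    like_blocked = p_reject_if_blocked if rejected else 1 - p_reject_if_blocked
    like_free = p_reject_if_free if rejected else 1 - p_reject_if_free
    num = like_blocked * p_blocked
    return num / (num + like_free * (1 - p_blocked))

belief = 0.5  # prior: no idea whether the hidden region is blocked
for rejected in [True, True, False, True]:  # critiques on path segments
    belief = update(belief, rejected)
print(round(belief, 3))  # → 0.919
```

After three rejections and one approval, the belief that the region is blocked has climbed well above the prior, which is the essential mechanism: sparse yes/no feedback accumulates into a usable model of the unseen parts of the workspace.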

To test BLIND, the Rice lab directed a Fetch robot, an articulated arm with seven joints, to grab a small cylinder from one table and move it to another, maneuvering past a barrier along the way. 

“If you have more joints, instructions to the robot are complicated,” Quintero-Peña said. “If you’re directing a human, you can just say, ‘Lift up your hand.’”

But a robot’s programmers have to be specific about the movement of each joint at each point in its trajectory, especially when obstacles block the machine’s “view” of its target. 

Rather than programming a trajectory up front, BLIND inserts a human mid-process to refine the choreographed options -- or best guesses -- suggested by the robot’s algorithm. “BLIND allows us to take information in the human's head and compute our trajectories in this high-degree-of-freedom space,” Quintero-Peña said.

“We use a specific way of feedback called critique, basically a binary form of feedback where the human is given labels on pieces of the trajectory,” he said. 

Lydia Kavraki
Vaibhav Unhelkar

These labels appear as connected green dots that represent possible paths. As BLIND steps from dot to dot, the human approves or rejects each movement to refine the path, avoiding obstacles as efficiently as possible. 
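As a hedged sketch (not the authors' implementation), the dot-by-dot approval loop described above might look like the following, where `critique` stands in for the human's yes/no judgment on each segment:

```python
# Sketch of stepping through a candidate path dot by dot, keeping
# segments the human approves and stopping at the first rejection so
# the planner can propose an alternative from that point.
def refine_path(path, critique):
    """Walk a candidate path; `critique(segment)` returns True to approve.

    Returns the approved prefix, so the robot only ever executes
    motion the human has signed off on."""
    approved = [path[0]]
    for a, b in zip(path, path[1:]):
        if not critique((a, b)):
            break  # rejected segment: truncate here and replan
        approved.append(b)
    return approved

# Toy example: the human rejects any segment ending at x == 2,
# where an unseen barrier sits.
path = [(0, 0), (1, 0), (2, 0), (3, 0)]
safe = refine_path(path, lambda seg: seg[1][0] != 2)
print(safe)  # [(0, 0), (1, 0)]
```

The key design point, per the study, is that the feedback is binary: the human never specifies joint angles, only whether a proposed piece of motion is acceptable.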

“It’s an easy interface for people to use, because we can say, ‘I like this’ or ‘I don’t like that,’ and the robot uses this information to plan,” Chamzas said. Once rewarded with an approved set of movements, the robot can carry out its task, he said.

“One of the most important things here is that human preferences are hard to describe with a mathematical formula,” Quintero-Peña said. “Our work simplifies human-robot relationships by incorporating human preferences. That’s how I think applications will get the most benefit from this work.”

“This work wonderfully exemplifies how a little, but targeted, human intervention can significantly enhance the capabilities of robots to execute complex tasks in environments where some parts are completely unknown to the robot but known to the human,” said Kavraki, a robotics pioneer whose resume includes advanced programming for NASA’s humanoid Robonaut aboard the International Space Station. 

Constantinos Chamzas
Carlos Quintero-Peña

“It shows how methods for human-robot interaction, the topic of research of my colleague Professor Unhelkar, and automated planning pioneered for years at my laboratory can blend to deliver reliable solutions that also respect human preferences.”

Rice undergraduate alumna Zhanyi Sun and Unhelkar, an assistant professor of computer science, are co-authors of the paper. Kavraki is the Noah Harding Professor of Computer Science and a professor of bioengineering, electrical and computer engineering and mechanical engineering, and director of the Ken Kennedy Institute.

The National Science Foundation (2008720, 1718487) and an NSF Graduate Research Fellowship Program grant (1842494) supported the research. 

Peer-reviewed research

Human-Guided Motion Planning in Partially Observable Environments: https://kavrakilab.org/publications/quintero-chamzas2022-blind.pdf

Video

Computer scientists develop a method that allows humans to help complex robots efficiently “see” their environments and carry out tasks.

https://youtu.be/RbDDiApQhNo

Video courtesy of the Kavraki Lab/Rice University

Image for download

The task set for this Fetch robot by Rice University computer scientists is made easier by their BLIND software, which allows for human intervention when the robot’s path is blocked by an obstacle. Keeping a human in the loop augments robot perception and prevents the execution of unsafe motion, according to the researchers. (Credit: Kavraki Lab/Rice University)

https://news-network.rice.edu/news/files/2022/06/0613_ROBOT-1-web.jpg


Related materials

Kavraki Lab: https://kavrakilab.org

Department of Computer Science: https://csweb.rice.edu

George R. Brown School of Engineering: https://engineering.rice.edu

About Rice

Located on a 300-acre forested campus in Houston, Rice University is consistently ranked among the nation’s top 20 universities by U.S. News & World Report. Rice has highly respected schools of Architecture, Business, Continuing Studies, Engineering, Humanities, Music, Natural Sciences and Social Sciences and is home to the Baker Institute for Public Policy. With 4,240 undergraduates and 3,972 graduate students, Rice’s undergraduate student-to-faculty ratio is just under 6-to-1. Its residential college system builds close-knit communities and lifelong friendships, just one reason why Rice is ranked No. 1 for lots of race/class interaction and No. 1 for quality of life by the Princeton Review. Rice is also rated as a best value among private universities by Kiplinger’s Personal Finance.

 
