Blended Shared Control


Posted on July 1, 2018 at 10:23

Tag: Research

Github Project

Paper

Blended Shared Control on the Turtlebot 2.0

Investigating whether a navigation policy improves a human-in-the-loop controller with simulated drift and input delay.

Turtlebot

One of my first robotics projects was implementing a blended shared controller on the Turtlebot. It was my introduction to ROS, model-based control, and the design of a research experiment rigorous enough for publication. The project was part of Dr. Velin Demitrov’s thesis, “Model-Based Robot Control in Human-in-the-Loop Cyber Physical Systems.” 1 Our goal was to investigate whether the performance of an autonomous agent improves or degrades when human feedback is introduced into the control loop.

Experiment

Figure 1: Environment setup of the doorway navigation task.

We adopted the approach described in Enes’ paper 2, which dynamically varies the blending parameter between the human and autonomous agents. The specific formulation changes the blending parameter as a function of the distance to the goal and the difference between the operator’s input and the optimal trajectory computed by the robot’s navigation stack. This formulation is attractive because the algebraic blending is intuitive to understand while remaining extensible with application-specific optimization. In addition, the architecture enables continuous, dynamic trading of control authority between human and robot depending on the shared control modality selected.
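A minimal sketch of what such a blend can look like. The `d_max` and `w` constants and the exact α schedule here are my own placeholders for illustration, not the formulation from the paper; the only properties carried over are that α is a function of distance to goal and of the human/robot command disagreement, and that it decays toward 0.5 at the goal.

```python
import numpy as np

def blend(u_human, u_robot, dist_to_goal, d_max=3.0, w=0.5):
    """Illustrative blended shared control step.

    alpha weights the human command; (1 - alpha) weights the robot's.
    Far from the goal with agreeing commands, alpha approaches 1.0
    (user-led); at the goal it decays to 0.5 (equal sharing).
    d_max and w are made-up tuning constants, not the paper's values.
    """
    u_human = np.asarray(u_human, dtype=float)
    u_robot = np.asarray(u_robot, dtype=float)

    # Disagreement between operator input and the planner's command
    disagreement = np.linalg.norm(u_human - u_robot)

    # Distance term saturates at 1.0 beyond d_max from the goal
    dist_term = min(dist_to_goal / d_max, 1.0)

    # alpha in [0.5, 1.0]: shrinks as the goal nears or disagreement grows
    alpha = 0.5 + 0.5 * dist_term / (1.0 + w * disagreement)

    return alpha * u_human + (1.0 - alpha) * u_robot, alpha
```

With commands expressed as (linear, angular) velocity pairs, this slots directly between the teleop input and the velocity topic the base subscribes to.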

To validate the blended shared control implementation, we conducted two studies with a Turtlebot in a real environment. The first study, with 9 college-aged volunteers, asked users to complete a navigation task to a goal requiring one 90-degree turn. The second, larger study, with 12 college-aged volunteers, asked users to complete a different navigation task: navigating around an obstacle and traversing a doorway, as seen in Fig. 1. The goal of the experiments was to quantify the degradation of user performance as time delay and drift were varied. These scenarios were selected because prior work in BSC for semi-autonomous wheelchairs 3 4 and mobile robots 5 does not consider the effects of drift or time delay, which are common challenges we have encountered in the assistive robots we have implemented.

Using the Robot Operating System (ROS) and a Turtlebot, we prototyped the SLAM algorithm and blending controller needed to test the BSC. Users were asked to navigate the robot around an obstacle and through an open doorway with the blended shared controller running. Four scenarios were tested: a control run with no shared control, a baseline shared control run with no disturbances, and time-delay and drift runs, both with shared control. We tested time delays of 0.5, 1.0, and 2.0 seconds and drift values of 0.1, 0.3, and 0.5.
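The disturbances can be sketched as a thin wrapper over the teleop command stream: a FIFO buffer holds commands for the delay interval, and a constant angular bias stands in for drift. The buffer-based delay and constant-bias drift model here are illustrative assumptions, not the exact implementation; the delay and drift magnitudes match the scenarios above.

```python
from collections import deque

class DisturbedTeleop:
    """Sketch of the simulated disturbances applied to operator commands.

    A FIFO buffer delays each (linear, angular) velocity command by
    delay_s seconds at the given control rate; a constant angular bias
    models drift. Values like 0.5 s delay and 0.1 drift follow the
    experiment scenarios; the mechanics are illustrative.
    """
    def __init__(self, delay_s=0.5, drift=0.1, rate_hz=10):
        # Pre-fill with zero commands so the first delay_s seconds output nothing
        self.buffer = deque([(0.0, 0.0)] * int(delay_s * rate_hz))
        self.drift = drift

    def step(self, v, w):
        # Queue the fresh command; emit the one issued delay_s ago
        self.buffer.append((v, w))
        v_d, w_d = self.buffer.popleft()
        # Constant angular bias simulates drift
        return v_d, w_d + self.drift
```

Called once per control tick between the joystick node and the blending controller, this reproduces both disturbance conditions with a single code path.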

Alpha

Figure 2: Blending parameter changing over the course of an experiment.

Figure 2 shows the aggregated plot of the change in the blending parameter α from the first study. Control authority shifts dynamically between the user and the robot over the course of the navigation task. At the beginning, α is near 0.9, indicating the system is biased towards the user’s input with slight guidance from the robot. As the target location is approached, control authority shifts to the robot to help guide the user towards it. This is the expected behavior of the blending parameter based on 1, where α asymptotically approaches 0.5, indicating the robot and user share control of the system equally.

We ran a Wilcoxon signed-rank test to determine whether there was a statistically significant difference in the time and distance taken to complete the task under the shared control, time delay, and drift scenarios. The results indicate that users took longer to complete the task with time delay for all delay values, and with drift for all drift values except 0.1, suggesting that 0.1 was too small a drift to cause a significant difference in completion time.
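With paired per-participant measurements, such a test is a one-liner with `scipy.stats.wilcoxon`. The completion times below are invented for illustration only; they are not the study's data.

```python
from scipy.stats import wilcoxon

# Hypothetical paired completion times (s) for 12 participants:
# baseline shared control vs. a delayed run. Invented numbers.
baseline = [41.2, 38.5, 45.0, 39.7, 42.1, 44.3,
            40.8, 37.9, 43.5, 46.0, 39.2, 41.8]
delayed  = [48.1, 44.0, 52.3, 45.9, 47.2, 55.0,
            46.4, 43.1, 50.2, 53.8, 44.7, 49.9]

# Wilcoxon signed-rank test on the paired differences;
# appropriate here since completion times need not be normal
stat, p = wilcoxon(baseline, delayed)
print(f"W = {stat}, p = {p:.4f}")
```

The signed-rank test is the right choice over a paired t-test when, as with small samples of completion times, normality of the differences cannot be assumed.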

The results also show that the distance traveled is statistically similar for the 0.5 and 1.0 second delays but not for the 2.0 second delay, indicating that the distance metric is largely unaffected until the delay becomes quite large. For the drift scenarios, however, the difference in distance traveled from the normal shared control scenario is not statistically significant. This result is inconclusive and would likely require more runs to reject our null hypothesis; the constraints posed by the specific path needed to complete the task likely contribute as well.

Future work involves testing the blended shared control algorithm on an actual noisy robot (a Clearpath Jackal we have with a dysfunctional odometry sensor) and using reinforcement learning for tuning the shared control parameter.

  1. Model-Based Robot Control in Human-in-the-Loop Cyber Physical Systems - Northeastern University

  2. Blended Shared Control of Zermelo’s navigation problem - IEEE ACC 

  3. Using machine learning to blend human and robot controls - IEEE ICORR 

  4. Human-in-the-Loop Optimization of Shared Autonomy in Assistive Robotics - IEEE RAS 

  5. Blending human and robot inputs for sliding scale autonomy - IEEE ROMAN