Chapter 11

Haptic Teleoperation*

Abstract

Although huge advances have been accomplished in research on fully autonomous UAVs, human-teleoperated UAVs remain a good alternative because of the still unsolved issues and security constraints of fully autonomous operation. It is nevertheless important to aid the operator as much as possible by providing useful information and making the operation more intuitive; moreover, the risk of human mistakes must be minimized. In this chapter, collision-free haptic teleoperation is accomplished using monocular vision for obstacle detection. A human pilot commands the UAV by giving angle references through a haptic joystick in delta configuration. A simple controller assists the human user by automatically keeping the end effector at its center position. A potential field methodology is used to prevent collisions: once an object is detected by the vision algorithm within a certain security zone, a repulsive force is exerted by the haptic device to inform the pilot of the danger. Experimental results show the validity of the proposed approaches.

Keywords

Haptic teleoperation; Semi-autonomous flight; Quadcopter vehicle; Visual localization; Flight tests

Due to security constraints and the high difficulty of obtaining solutions in hazardous scenarios, fully autonomous navigation remains an unsolved problem, where the lack of a general solution for position estimation and the continuously changing conditions are among the main challenges to overcome. Hence, the notion of UAVs remotely operated by a human pilot remains a good alternative in several situations. An experienced human user may be able to safely operate a UAV if enough information is provided; however, it is always desirable to ease the task for the user and add robustness against human mistakes, simplifying the system and making it operable even by inexperienced users.

In this chapter we are interested in the case when no reliable position estimation is available and the human pilot is completely in charge of the mission, manually controlling the drone with the help of an autopilot in attitude stabilization mode. Despite the lack of localization feedback, the pilot must be able to perform a wide range of missions. The key is to assist him with important information about the state of the system, the environmental conditions, and possible risks. Visual feedback from a wireless video camera streaming in real time is a powerful tool often employed for these purposes. However, an excess of visual information from videos and real-time plots of the system's state may overwhelm an inexperienced pilot, confusing and even distracting him/her from the main goal. One interesting alternative for complementing the feedback to the human user is force feedback by means of a haptic interface [1]. Haptics is a tactile feedback technology that recreates the sense of touch by applying forces, vibrations, or motions to the user. Such technology provides the pilot with important information through the sense of force, considerably improving the flight experience and the pilot's awareness. For example, the haptic device can emulate external forces affecting the UAV, such as wind gusts or other external perturbations. Another interesting application consists in warning the user about potential dangers such as obstacles [2], [3], or preventing him from maneuvering in unsafe zones where the drone could damage itself, go out of communication range, or, above all, pose a risk to other humans.

We are particularly interested in investigating the use of haptic devices for safe UAV teleoperation, preventing the user from crashing the quadrotor and assisting him/her in obstacle avoidance. The most common haptic feedback techniques for collision avoidance in UAVs are force feedback (for example, an artificial force field) and stiffness feedback using a virtual spring. A comparative analysis of these techniques can be found in [4] and [5]. An adjustable feedback control strategy which accounts for stiffness and damping effects in the haptic interface is proposed in [6].

11.1 Experimental Setup

Repairing an experimental prototype is a time-consuming and expensive task because of the high cost and fragility of the sensors and other electronic components embedded in a UAV. In addition, evaluating new algorithms is risky: the experimental platform is continuously in danger of falling and crashing due to implementation bugs or bad parameter tuning. Hence, an inexpensive navigation test-bed based on the Robot Operating System (ROS) was built to reduce implementation time and economic cost. It consists of an inexpensive commercial AR.Drone quadrotor wirelessly connected to a ground computer, where all the extra computations are executed using the ROS middleware, as illustrated in Fig. 11.1. This test-bed follows and extends the ideas presented in [7] and offers an excellent alternative for fast and safe validation of newly proposed control strategies, state observers, and computer vision algorithms.

Figure 11.1 Navigation test-bed on ROS. Source: Courtesy of Parrot images.

11.1.1 Ground Station

A ground computer is responsible for recovering the drone's information, monitoring the system states in real time, tuning parameters online, computing the control laws, state observers, and computer vision algorithms, and issuing high-level commands such as switching between the different operation modes. The ground station is based on ROS, a set of open-source libraries and tools that help build robot applications, from drivers to state-of-the-art algorithms, with powerful developer tools. It serves as middleware that lets the different programs, so-called nodes, interact and communicate with each other and with the operating system by means of messages and services. As can be appreciated in Fig. 11.1, the main nodes used in this work are:

• AR.Drone driver node. This node is in charge of communicating with the AR.Drone via WiFi. Information from the quadrotor's embedded sensors, as well as the video streams from the two onboard cameras, is recovered at a rate of 200 Hz. This node also controls the drone by sending the desired references to the internal autopilot, along with take-off and landing commands.

• Tum_ardrone state estimation node [7]. It applies the PTAM algorithm to estimate the quadrotor pose with respect to a visual scene.

• Haptic node. This node allows assisted teleoperation with force feedback using a haptic device.

• Plots node. The rqt_plot node provided with ROS facilitates the monitoring task, vital for most mobile robot applications, by plotting any available variable in real time.

• Parameter tuning node. Online parameter tuning is possible thanks to the rqt_reconfigure node included with ROS.

These nodes represent the basic operation of the test-bed. However, other nodes can be programmed and easily added to the system to expand or even substitute some of the described nodes, depending on the application and desired behavior of the system.
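To illustrate how such an additional node plugs into the test-bed, the following minimal sketch (Python/rospy) commands take-off and then publishes zero attitude references to the internal autopilot at 100 Hz, which corresponds to hovering. The topic names follow the standard ardrone_autonomy driver convention; they are assumptions and should be checked against the driver actually deployed.

    #!/usr/bin/env python
    # Minimal test-bed node sketch: sends hover references to the AR.Drone
    # autopilot. Topic names assume the ardrone_autonomy driver.
    import rospy
    from geometry_msgs.msg import Twist
    from std_msgs.msg import Empty

    def main():
        rospy.init_node('teleop_relay')
        # In the driver convention, linear.x/y carry pitch/roll references,
        # linear.z the vertical speed, and angular.z the yaw rate.
        cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
        takeoff_pub = rospy.Publisher('ardrone/takeoff', Empty, queue_size=1)

        rospy.sleep(1.0)              # let the connections establish
        takeoff_pub.publish(Empty())  # take-off command

        rate = rospy.Rate(100)        # references are sent at 100 Hz
        while not rospy.is_shutdown():
            cmd_pub.publish(Twist())  # zero references: hover in place
            rate.sleep()

    if __name__ == '__main__':
        main()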

11.1.2 Monocular Vision Localization

The aerial vehicle's position is obtained by fusing computer vision and inertial data with an EKF. The vision algorithm, based on Parallel Tracking and Mapping (PTAM), estimates the camera pose in an unstructured scene [7–9]. The algorithm processes the visual information for tracking and for mapping in two parallel threads. It also constructs a sparse depth map (see Fig. 11.2), which is used in this work to estimate the distance to frontal objects.

Figure 11.2 UAV localization w.r.t. the sparse depth map (top left). Characteristic features on the image (top right). Horizontal projection of the sparse depth map obtained by PTAM (bottom).

Although the PTAM algorithm is a good solution for pose estimation, it was conceived for mostly static, small scenes, and it does not provide an absolute scale for the map, which can be a drawback for MAV (Micro Aerial Vehicle) applications. Nevertheless, in [7] and [10] the authors proposed an effective solution that fuses measurements from an IMU, a camera, and ultrasound sensors using a scale estimator and an EKF. One advantage of this solution is that the vision code is available as open source for ROS.

For this work, the control algorithm code in [7] was replaced with a new one in order to easily implement and validate different control strategies and to help tune the required gains. In addition, the localization algorithm was modified to recover the point cloud of the depth map generated by PTAM and to send it to another node that estimates the distance to potential collisions, as explained in Sect. 11.2. This way the operator can select online between the different programmed trajectories, control laws, and operation modes, as well as modify any tuning parameter in real time.
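As a rough sketch of this exchange, the node below subscribes to the recovered PTAM map and stores it as an array of points for the distance estimator of Sect. 11.2. The topic name /ptam/pointcloud and the sensor_msgs/PointCloud message type are assumptions; the modified tum_ardrone code may expose the map differently.

    # Sketch: receive the sparse PTAM map and keep it as an (n, 3) array.
    # Topic name and message type are assumptions (see text above).
    import numpy as np
    import rospy
    from sensor_msgs.msg import PointCloud

    latest_points = np.empty((0, 3))  # rows are p_i = (x_i, y_i, z_i)

    def cloud_cb(msg):
        global latest_points
        if msg.points:  # geometry_msgs/Point32[] in sensor_msgs/PointCloud
            latest_points = np.array([[p.x, p.y, p.z] for p in msg.points])

    if __name__ == '__main__':
        rospy.init_node('obstacle_distance')
        rospy.Subscriber('/ptam/pointcloud', PointCloud, cloud_cb)
        rospy.spin()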

11.1.3 Prototype

The AR.Drone 2.0 is a well-known commercial quadrotor (price ≈ 300 USD) which can be safely used close to people and is robust to crashes. It measures 53 × 52 cm and weighs 0.42 kg. It is equipped with three-axis gyroscopes and accelerometers, an ultrasound altimeter, an air pressure sensor, and a magnetic compass. It also provides video streams from two cameras: the first looks downwards with a resolution of 320 × 240 pixels at 60 fps and is used to estimate the horizontal velocities with an optical flow algorithm, while the second looks forward with a resolution of 1280 × 720 pixels at 30 fps and is used by the monocular vision algorithm. Fig. 11.3 lists the main technical characteristics of the AR.Drone 2.0.

Figure 11.3 AR.Drone 2.0 technical specifications. Source: Courtesy of Parrot images.

However, neither the software nor the hardware of the AR.Drone can be modified easily. It includes an internal onboard autopilot that controls the roll and pitch angles, the altitude velocity, and the yaw rotational speed (ϕ, θ, ż, and ψ̇) according to external references. These references are treated as control inputs and are computed and sent at a frequency of 100 Hz. All sensor measurements are sent to the ground station at a frequency of 200 Hz, where the vision localization and state estimation algorithms run in real time on ROS.

11.1.4 Haptic Interface

The low-cost Novint Falcon haptic device is used to remotely control the UAV while providing the pilot with force feedback. It consists of a three-degree-of-freedom robot arm in delta configuration, with a workspace of about 10 × 10 × 10 cm and a resolution of 400 dpi (see Fig. 11.4). The device can exert forces of up to 8.9 N along the three axes. The Novint Falcon is connected to the ground station via USB and is controlled through ROS. The position of the end effector and the state of the buttons are recovered and used to control the drone's behavior, while information from the computer vision algorithm is used to determine the exerted forces, improving the pilot's flying experience and awareness. More details about the control of the UAV from the arm position and the generation of the forces applied by the haptic device are given in Sect. 11.3.

Figure 11.4 Novint Falcon, delta configuration haptic device.

11.2 Collision Avoidance

The continuous change in operating conditions produced by environmental factors and interactions with other, unknown agents poses a major challenge in the pursuit of autonomous and semi-autonomous navigation of mobile robots, with the added difficulty of the payload limitation of UAVs. This drives the need for real-time perception of the unknown environment and an appropriate reaction, both major concerns in the design of autonomous UAVs. In the present work we are interested in developing an effective strategy for detecting and avoiding collisions in UAV navigation, using only the information already available from the embedded sensors of the inexpensive commercial AR.Drone 2.0 quadrotor.

We employ computer vision algorithms on the frontal camera images to detect feature points and use them to estimate the distance to possible obstacles, taking advantage of the sparse depth map generated by the PTAM localization algorithm. Since only a sparse map is available, some obstacles can be missed if they do not present enough visible characteristic points. In order to ensure collision avoidance and safe operation of the system, only the horizontal projection of the point cloud is considered, as depicted in Fig. 11.2. This means that the height of the obstacles is ignored and obstacle evasion is only possible in the horizontal plane. This approach, although conservative, is still useful for several missions where obstacles such as walls or columns are present. However, the resulting depth map is noisy, and special attention should be paid to obstacles with low-texture surfaces (e.g., flat one-color walls).

As stated, the distance to possible obstacles is estimated from the horizontal projection of the point cloud, which is composed of a set P of n characteristic image points p_i(x_i, y_i, z_i), i = 1, …, n, computed by the PTAM algorithm. We define the estimated distance to frontal obstacles d_y as the average depth, relative to the quadrotor position along y, of the η points lying inside a lateral band of half-width ε around the quadrotor's lateral position x, namely

$$
d_y = y - \frac{1}{\eta} \sum_{y_i \in \Omega_y} y_i,
\qquad
\Omega_y = \left\{\, p_i(x_i, y_i, z_i) \in P \;\middle|\; x_i \in [x - \varepsilon,\; x + \varepsilon] \,\right\}.
\tag{11.1}
$$
We can define the estimated distance to lateral obstacles d_x in the same way.
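A direct implementation of Eq. (11.1) is sketched below in Python, with the point cloud given as an (n, 3) array and the axis convention of Fig. 11.2 (x lateral, y depth). Returning the magnitude of the relative average depth, and treating an empty band as "no obstacle detected", are design choices of this sketch rather than part of the original formulation.

    import numpy as np

    def frontal_distance(points, x, y, eps=0.5):
        """Eq. (11.1): average depth of the map points in the lateral band
        Omega_y of half-width eps around the quadrotor position (x, y)."""
        mask = np.abs(points[:, 0] - x) <= eps   # select Omega_y
        omega = points[mask]
        if omega.shape[0] == 0:                  # eta = 0: no visible points
            return np.inf
        return abs(y - omega[:, 1].mean())       # distance is nonnegative

The lateral distance d_x follows by exchanging the roles of the two horizontal axes; in practice the function would be evaluated at each pose update on the array maintained by the point-cloud node of Sect. 11.1.2.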

In order to avoid collisions, a potential field is applied such that if the distance d_i (i = x, y) falls below a certain safe distance d_s, a repulsive force F_rep_i is exerted as follows (see Fig. 11.5):

$$
F_{rep_i} =
\begin{cases}
0 & \text{if } d_i > d_s, \\[4pt]
k_{rep_i}\left(\dfrac{1}{d_i} - \dfrac{1}{d_s}\right)\dfrac{1}{d_i^2} & \text{if } d_i \le d_s.
\end{cases}
\tag{11.2}
$$

Figure 11.5 Repulsive force scheme. Source: Courtesy of Parrot images.

Note that lateral obstacle detection becomes much more challenging since only a frontal camera is used for this purpose.
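A minimal sketch of the repulsive force (11.2), with the default values taken from Table 11.1, is the following:

    def repulsive_force(d, d_s=5.0, k_rep=4.0):
        """Eq. (11.2): zero beyond the safe distance d_s, growing
        unboundedly as the estimated obstacle distance d approaches 0."""
        if d > d_s:
            return 0.0
        return k_rep * (1.0 / d - 1.0 / d_s) * (1.0 / d ** 2)

Because the force vanishes exactly at d = d_s, the transition into the repulsive region is continuous, which avoids abrupt jolts on the haptic arm.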

11.3 Haptic Teleoperation

In this work we successfully applied the Novint Falcon haptic device to assist teleoperation and to prevent the user from crashing the quadrotor against an obstacle. To do so, the position of the haptic device's end effector (x_h, y_h, z_h) is linearly mapped to the desired roll and pitch angles (ϕ_d, θ_d) and the desired altitude velocity (ż_d) as

$$
\phi_d = k_{h_y}(y_h - o_y), \qquad
\theta_d = k_{h_x}(x_h - o_x), \qquad
\dot{z}_d = k_{h_z}(z_h - o_z),
\tag{11.3}
$$

where k_hx, k_hy, k_hz, o_x, o_y, and o_z are suitable gains and offsets. This strategy is useful for quadrotors with internal orientation and altitude controllers. Observe from (11.3) that to keep the quadrotor hovering at a desired position, the user should keep the haptic device at the center of its workspace. In order to assist the user in this task and simplify the manual control of the UAV, a proportional-derivative controller regulates the end-effector position to the origin when no force is exerted by the operator. Similarly, a repulsive force is applied to the haptic device once the quadrotor approaches an obstacle. The feedback forces applied to the haptic device are then defined as

$$
F_x =
\begin{cases}
-k_{ph_x}\,x - k_{dh_x}\,\dot{x} & \text{if } d_x > d_s, \\[4pt]
k_{hrep_x}\left(\dfrac{1}{d_x} - \dfrac{1}{d_s}\right)\dfrac{1}{d_x^2} & \text{if } d_x \le d_s,
\end{cases}
\qquad
F_y =
\begin{cases}
-k_{ph_y}\,y - k_{dh_y}\,\dot{y} & \text{if } d_y > d_s, \\[4pt]
k_{hrep_y}\left(\dfrac{1}{d_y} - \dfrac{1}{d_s}\right)\dfrac{1}{d_y^2} & \text{if } d_y \le d_s,
\end{cases}
\qquad
F_z = -k_{ph_z}\,z - k_{dh_z}\,\dot{z},
\tag{11.4}
$$

with the control gains k_phx, k_phy, k_phz, k_dhx, k_dhy, k_dhz, k_hrep_i ∈ ℝ⁺.
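The following sketch gathers the teleoperation mappings (11.3) and (11.4) in Python, with the gains of Table 11.1 as defaults. The helper rep() reproduces Eq. (11.2) with the haptic gain k_hrep so that the snippet is self-contained; function and variable names are illustrative, not part of the original implementation.

    def rep(d, d_s=5.0, k_hrep=2.0):
        """Eq. (11.2) with the haptic repulsive gain (Table 11.1)."""
        return 0.0 if d > d_s else k_hrep * (1.0 / d - 1.0 / d_s) / d ** 2

    def references(xh, yh, zh, ox=0.0, oy=0.0, oz=0.0,
                   khx=12.7, khy=14.3, khz=18.2):
        """Eq. (11.3): end-effector position -> autopilot references."""
        phi_d   = khy * (yh - oy)   # desired roll
        theta_d = khx * (xh - ox)   # desired pitch
        zdot_d  = khz * (zh - oz)   # desired vertical speed
        return phi_d, theta_d, zdot_d

    def haptic_forces(x, y, z, xdot, ydot, zdot, dx, dy,
                      d_s=5.0, kp=100.0, kd=500.0):
        """Eq. (11.4): PD recentering forces, replaced axis-wise by the
        repulsive force when the obstacle distance drops below d_s."""
        Fx = -kp * x - kd * xdot if dx > d_s else rep(dx, d_s)
        Fy = -kp * y - kd * ydot if dy > d_s else rep(dy, d_s)
        Fz = -kp * z - kd * zdot            # altitude axis: PD only
        return Fx, Fy, Fz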

11.4 Real-Time Experiments

Extensive experiments were executed to validate the proposed algorithms. The parameters used were tuned by trial and error and are presented in Table 11.1.

Table 11.1

Parameters

ε [m]    k_rep    d_s [m]    k_hrep_x,y
0.5      4        5          2

k_hx     k_hy     k_hz     k_ph_x,y,z    k_dh_x,y,z
12.7     14.3     18.2     100           500

Some experiments can be found on video at

https://www.youtube.com/watch?v=fr0dTSm6Go8

For the teleoperation scenario, a human pilot controls the position of the quadrotor through a haptic device. The practical goal here is to use the haptic device to feed information from the vehicle back to the pilot and prevent him from colliding, via opposing forces in the haptic device whenever the quadrotor leaves the safety zone. This is useful, for example, in wall-inspection missions where the operator's visibility is limited.

Collision-free haptic teleoperation is studied through Figs. 11.6–11.9. In these flight tests the user flew the UAV in semi-autonomous mode using the haptic device; only the orientation control runs autonomously. The user deliberately attempted to crash the vehicle against a frontal wall several times. The first attempts were performed slowly, but the last one was done at high speed, see Fig. 11.6.

Figure 11.6 xy performance when the user tries to crash the quadcopter into the wall.
Figure 11.7 y and x responses (top and bottom, respectively).
Figure 11.8 Haptic feedback forces.
Figure 11.9 UAV control inputs.

The position responses of the UAV's x and y states throughout the experiments are depicted in Fig. 11.7. Observe in this figure (for the y axis), at times 6, 9, and 13.5 s, how the reactive collision avoidance algorithm prevented the user from driving the quadrotor into a dangerous area too close to the wall. Furthermore, at t = 16 s the pilot deliberately tried to crash the UAV in a fast maneuver toward the wall; although the vehicle touched the wall, it never crashed, thanks to the good performance of the proposed algorithm even in this extreme case. The x position remains quasi-constant since no lateral obstacles were present.

Finally, observe the feedback force applied to the haptic device and the control inputs sent to the UAV, shown in Figs. 11.8 and 11.9, respectively. When approaching the wall, the haptic device exerts a repulsive force alerting the human operator to the danger and even preventing him from crashing the quadrotor; see t = 16 s in Figs. 11.8 and 11.9.

It is important to point out that the obtained results are quite satisfactory considering that only a monocular camera is used to localize the quadrotor and detect collisions, instead of an expensive motion capture system and/or extra range sensors as used by other teams. Hence, the obtained results can be easily reproduced in outdoor flight tests.

11.5 Discussion

In this chapter we have studied the use of a haptic device for collision-free UAV teleoperation, using force feedback to steer the human user away from potential obstacles. Remote operation of UAVs by a human pilot remains a good alternative to completely autonomous navigation, which is not always feasible. Force feedback through a haptic device assists the human operator, improving the flight experience for the pilot and increasing the safety of the mission. Here we proposed reactive collision avoidance by means of potential fields applied to the haptic device, so that the human user feels opposing forces when trying to drive the drone into a dangerous area.

Force feedback is also applied to ease the piloting task by automatically driving the haptic device's end effector to its center, which in turn corresponds to the attitude stabilization of the UAV.

We also presented an obstacle detection strategy based on computer vision and a frontal camera. This methodology allows detecting possible collisions from visual information alone. However, since only a sparse depth map projected onto the horizontal plane is used, the height of the obstacles remains unknown. This is still useful for safe operation in several scenarios, for example, to avoid crashes against walls or columns in indoor missions where the pilot's visibility is limited.

References

[1] P. Stegagno, M. Basile, H. Bülthoff, A. Franchi, A semi-autonomous UAV platform for indoor remote operation with visual and haptic feedback, In: International Conference on Robotics and Automation (ICRA). Hong Kong, China. IEEE; 2014.

[2] H. Rifaï, M.-D. Hua, T. Hamel, P. Morin, Haptic-based bilateral teleoperation of underactuated unmanned aerial vehicles, In: Proceedings of the 18th World Congress of the International Federation of Automatic Control (IFAC). Milano, Italy. IEEE; 2011.

[3] S. Alaimo, L. Pollini, J. Bresciani, H. Bülthoff, Evaluation of direct haptic aiding in an obstacle avoidance task for tele-operated systems, In: Proceedings of the 18th World Congress of the International Federation of Automatic Control (IFAC). Milano, Italy. IEEE; 2011.

[4] T. Lam, M. Mulder, M. van Paassen, Haptic feedback for UAV tele-operation – force offset and spring load modification, In: International Conference on Systems, Man, and Cybernetics. Taipei, Taiwan. IEEE; 2006.

[5] A. Brandt, M. Colton, Haptic collision avoidance for a remotely operated quadrotor UAV in indoor environments, In: International Conference on Systems, Man, and Cybernetics. Istanbul, Turkey. IEEE; 2010.

[6] S. Fu, H. Saeidi, E. Sand, B. Sadrfaridpour, J. Rodriguez, Y. Wang, J. Wagner, A haptic interface with adjustable feedback for unmanned aerial vehicles (UAVs) – model, control and test, In: American Control Conference (ACC). Boston, MA, USA. IEEE; 2016.

[7] J. Engel, J. Sturm, D. Cremers, Camera-based navigation of a low-cost quadrocopter, In: Intl. Conf. on Intelligent Robot Systems (IROS). Vilamoura, Algarve, Portugal. IEEE; 2012:2815–2821.

[8] G. Klein, D. Murray, Parallel tracking and mapping for small AR workspaces, In: Intl. Symposium on Mixed and Augmented Reality (ISMAR). Nara, Japan. IEEE; 2007:225–234.

[9] S. Weiss, D. Scaramuzza, R. Siegwart, Monocular-SLAM-based navigation for autonomous micro helicopters in GPS-denied environments, Journal of Field Robotics 2011;28:854–874.

[10] J. Engel, J. Sturm, D. Cremers, Scale-aware navigation of a low-cost quadrocopter with a monocular camera, Robotics and Autonomous Systems 2014;62:1646–1656.


*  “This chapter was developed in collaboration with D. Mercado and R. Lozano from Heudiasyc lab, UMR 7253 – Université de Technologie de Compiègne, France.”
