
Vision-Based Grasp Planning System for Dexterous Hands

Jiting Li, Wenkui Su, Yuru Zhang, Weidong Guo
Robotics Institute, Beihang University, Beijing, China, 100083
E-mail: lijiting@buaa.edu.cn, yuru@public.bta.net.cn

Abstract
This paper introduces a new approach to grasp planning for robotic dexterous hands. A master-slave control strategy is adopted to integrate human intelligence into the planning system: the human hand directly and interactively controls the robot hand. The human-machine interface is a computer vision system with two CCD cameras. The fingertips of the human hand carry markers of different geometries, so they can be identified by the vision system. When the human fingers move, the vision system first captures the images of the markers and performs feature-based identification. The images from the two cameras are then matched, and the image centers of the matched markers are used to calculate the positions of the fingertips. The human fingertip positions are finally mapped onto those of the dexterous hand in its palm frame. The human operator observes the motions of the robot fingers and decides the next step of motion. Through this procedure the human hand can guide the robot fingers to the target positions of grasping and manipulation. To verify the validity of the proposed grasp planning approach, two tasks are demonstrated in a virtual environment: one is pressing a button with a single finger, and the other is moving the thumb and index fingers. The tasks are simulated in real time and performed successfully.

Keywords: computer vision, grasp planning, multi-fingered dexterous hand, master-slave control

1 Introduction

As a key issue for dexterous hands, grasp planning has been broadly investigated [1-6]. In recent years, the use of master-slave and telemanipulation techniques to plan grasps for multi-fingered hands has attracted growing interest [2-5]. The main idea is that the human hand directly joins the grasp loop, so that human experience and intelligence ease the grasp decisions. In a typical master-slave system, the human hand executes the master grasp, the human-machine interface measures the human hand motion, and, after the motion is mapped from the human hand onto the robot hand, the robot hand executes the slave grasp. The complexity of grasp planning is thus greatly reduced.

In a master-slave system, the human-machine interface and the motion mapping are two important issues that strongly influence grasp performance, such as accuracy and speed. At present, datagloves are often used as the interface to measure the human hand motion [1-5]. They are convenient for measuring joint angles; however, they cannot deliver the expected precision when fingertip positions are needed accurately. For this reason, computer vision combined with artificial neural network techniques was used to calibrate the dataglove in the telemanipulation system for the DLR Hand [2]. The choice of interface depends on the motion parameters to be measured and on the space in which the motion is mapped. Similarly, the motion mapping can be done in joint space or in Cartesian space, depending on the motion parameters to be mapped.

Our goal is to set up a master-slave grasp system in which the human hand precisely controls the fingertip positions of the dexterous hand in real time, by means of a suitable human-machine interface and a feasible master-slave motion-mapping algorithm. As stated above, a dataglove cannot satisfy the precision requirement on the positions. Therefore, stereo computer vision is adopted in our system to measure the positions of the human fingertips. To meet the real-time requirement of master-slave grasping, the master environment and the fingertip markers are designed to be as simple as possible, which greatly decreases the image processing time. The other key issue solved in this paper is the master-slave motion mapping, which is required to make the master-slave manipulation simple and intuitive. To this end, we first establish the corresponding relation between the master and slave hand palms; then a linear incremental mapping of the fingertip positions is performed in Cartesian space in the palm frames.

2 System structure

The master-slave grasp system consists of the human hand, a virtual robot hand and a human-machine interface, as shown in figure 1. As the master hand, the human hand is marked with markers of different geometries on its fingertips. The slave hand is a dexterous hand, named the BH4 Hand, developed by our group at the Robotics Institute of Beihang University, China; its virtual model is shown in figure 2. The human-machine interface is a computer vision system with two CCD cameras that acquires and identifies the movement of the human fingertips. The calculated human fingertip positions are mapped onto those of the dexterous hand in its palm frame, and the dexterous fingers move accordingly. The human operator observes the motions of the robot fingers and decides the next step of motion. Through this procedure the human hand can guide the robot fingers to the target position of grasping.

Figure 1. Vision-based master-slave grasp planning system for dexterous hands (block diagram: master — motion of the human hand → human-machine interface: computer vision, identification of the human hand, motion mapping → slave — motion of the robot hand, with the operator's vision of the robot hand closing the active feedback loop)

Figure 2. Virtual model of the BH4 Hand

3 Identification of the human hand motion

The working procedure of the computer vision system is shown in figure 3. It is well known that the cameras must first be calibrated. The main concerns of this section are marking and identifying the human fingertips. The identification is feature based and divided into two steps: position identification and shape identification.

Figure 3. Identification procedure of the computer vision system (calibrating the cameras and marking the human fingertips → identifying features: position identification, then shape identification → feature matching → calculating the 3D coordinates of the markers)

3.1 Marking the human fingertips

The markers are used to calculate the fingertip positions, so they should not be too big. They should also be simple and regular, with markedly different features for different fingers, to make the image processing and identification easy and efficient. We choose several distinct geometric shapes as the markers, shown in figure 4.

Figure 4. Markers on the fingertips

3.2 Identification of marker positions

The task of position identification is to determine the positions of the markers in the whole image. As shown in figure 4, the markers are isolated black areas surrounded by white areas in the image. The identified positions therefore satisfy the following conditions:
1) The centers are black.
2) Their eight-neighborhoods are black.
3) They are surrounded by a larger white area.
The result is shown in figure 5; the identified markers are framed with squares.
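The three conditions translate directly into a window scan over a binarized image. Below is a minimal sketch of such a scan, assuming an 8-bit grayscale image stored row-major; the function name, the binarization threshold and the white-ring radius are illustrative assumptions, not values from the original implementation.

```cpp
#include <cstdint>
#include <vector>

// A candidate marker position: image coordinates of a dark pixel whose
// 8-neighborhood is dark and which is enclosed by a brighter ring.
struct MarkerHit { int x, y; };

// Minimal sketch of the three-condition test of Section 3.2.
// `img` is an 8-bit grayscale image, row-major, `w` x `h` pixels.
// `thresh` separates marker black from background white, and `ring` is the
// distance at which the surrounding white area is probed (both hypothetical).
std::vector<MarkerHit> findMarkerCandidates(const std::vector<std::uint8_t>& img,
                                            int w, int h,
                                            std::uint8_t thresh = 64,
                                            int ring = 8) {
    auto dark  = [&](int px, int py) { return img[py * w + px] <  thresh; };
    auto white = [&](int px, int py) { return img[py * w + px] >= thresh; };

    std::vector<MarkerHit> hits;
    for (int y = ring; y < h - ring; ++y) {
        for (int x = ring; x < w - ring; ++x) {
            // Condition 1: the center is black.
            if (!dark(x, y)) continue;

            // Condition 2: the eight-neighborhood is black.
            bool core = true;
            for (int dy = -1; dy <= 1 && core; ++dy)
                for (int dx = -1; dx <= 1 && core; ++dx)
                    core = dark(x + dx, y + dy);
            if (!core) continue;

            // Condition 3: a square ring at distance `ring` is entirely white,
            // i.e. the black blob sits inside a larger white area.
            bool surrounded = true;
            for (int d = -ring; d <= ring && surrounded; ++d)
                surrounded = white(x + d, y - ring) && white(x + d, y + ring) &&
                             white(x - ring, y + d) && white(x + ring, y + d);

            if (surrounded) hits.push_back({x, y});
        }
    }
    return hits; // adjacent hits can then be merged into one marker each
}
```

Neighboring pixels of one marker typically all pass the test, so a final merging step (for example, averaging adjacent hits) yields one position per marker.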

Figure 5. Result of position identification

3.3 Shape identification for markers

After the position identification, the target of this step is to match the images of the left and right cameras so as to distinguish the different fingers. The number of edges of each marker geometry is chosen as the feature to be matched: if two image regions have the same edge number, they are considered to be images of the same finger. Their image centers are then used to calculate the position of the corresponding fingertip. The result of shape identification is shown in figure 6.
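One way to realize the matching and the subsequent 3D calculation is sketched below. It assumes each identified marker has already been reduced to an edge count plus an image center, that image coordinates are measured from the principal point, and that the two cameras form a rectified, parallel stereo pair with known focal length and baseline, so depth follows from horizontal disparity. The paper does not state the camera configuration, so this geometry and all names here are assumptions of the sketch.

```cpp
#include <vector>

// An identified marker in one camera image: the edge count of its geometry
// (the matching feature of Section 3.3) and its image center, with (u, v)
// measured in pixels from the principal point.
struct Marker2D { int edges; double u, v; };

struct Point3D { double x, y, z; };

// Match left/right markers by equal edge count and triangulate each pair.
// Assumes a rectified parallel rig: f is the focal length in pixels and b the
// baseline, so depth is z = f * b / disparity (hypothetical setup; the
// original system may have used a general calibrated geometry instead).
std::vector<Point3D> matchAndTriangulate(const std::vector<Marker2D>& left,
                                         const std::vector<Marker2D>& right,
                                         double f, double b) {
    std::vector<Point3D> pts;
    for (const Marker2D& l : left) {
        for (const Marker2D& r : right) {
            if (l.edges != r.edges) continue;  // same finger iff same edge count
            double disparity = l.u - r.u;
            if (disparity <= 0.0) continue;    // reject geometrically impossible pairs
            double z = f * b / disparity;      // depth from disparity
            pts.push_back({ l.u * z / f,       // back-project the left image center
                            l.v * z / f,
                            z });
            break;                             // at most one partner per marker
        }
    }
    return pts;
}
```

Because each finger carries a geometry with a different edge count, the feature is cheap to compute and the match is unambiguous as long as every shape is used only once.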


Figure 6. Result of shape identification

4 Motion mapping between master and slave hands

Assume the palm is fixed and the motion mapping is done in Cartesian space. We define $\Delta r_i^R = k_i \, \Delta r_i^H$, where $\Delta r_i^H = [\Delta x_i^H \; \Delta y_i^H \; \Delta z_i^H]^T$ and $\Delta r_i^R = [\Delta x_i^R \; \Delta y_i^R \; \Delta z_i^R]^T$ are the increments of the fingertip positions of the human and robot hands, respectively, expressed in their palm frames. The subscript $i$ stands for each finger, and the superscripts $R$ and $H$ denote the robot and human hands. The factor $k_i$ is the mapping factor for the master/slave motion; it is defined as the length multiplier of each robot finger relative to the corresponding human finger.
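As a concrete reading of this mapping, the sketch below scales each Cartesian increment by the per-finger factor; the length ratios in the usage comment are hypothetical, not measured BH4 values.

```cpp
#include <cstddef>
#include <vector>

// A fingertip position increment expressed in a palm frame.
struct Vec3 { double x, y, z; };

// Linear incremental master-slave mapping of Section 4:
// delta r_i^R = k_i * delta r_i^H, with k_i the length multiplier of robot
// finger i relative to the corresponding human finger.
class FingertipMapper {
public:
    explicit FingertipMapper(const std::vector<double>& k) : k_(k) {}

    // Map one increment of human fingertip i onto robot fingertip i.
    Vec3 map(std::size_t i, const Vec3& dH) const {
        return { k_[i] * dH.x, k_[i] * dH.y, k_[i] * dH.z };
    }

private:
    std::vector<double> k_; // one mapping factor per finger
};

// Usage with hypothetical length ratios:
//   FingertipMapper mapper({1.2, 1.1, 1.1, 1.0});
//   Vec3 dR = mapper.map(1, {1.0, 0.0, -0.5}); // gives {1.1, 0.0, -0.55}
```

Scaling increments rather than absolute positions keeps the mapping independent of any offset between the two palm frames, which is presumably part of what keeps the manipulation simple and intuitive for the operator.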

5 Experiments for master-slave grasping

The experimental system is shown in figure 7.

Figure 7. Experimental system

Figure 8. Flow diagram of the system (start → open left/right eye images → search the markers globally → calculate the image centers of the markers → if both images are identified, calculate the coordinates of the fingertips in the world frame → once the fingertips have been initiated, calculate $P_i$ of the fingertips → transmit $P_i$ to the virtual fingers → end)

The virtual prototype of the BH4 dexterous hand is modeled in an OpenGL graphics environment, and the software is written in VC++. The processing time for a single image is less than 20 ms under the Windows 2000 operating system with a Pentium IV 2.0 GHz CPU. The average position-measurement error of the computer vision system is 1 mm, and the maximal error is less than 2 mm. The procedure is illustrated in figure 8.

We choose two operations that the human hand often performs in daily life to test the presented method: one is pressing a button with a single finger, and the other is pinching with the thumb and index fingers. As the results show, the tasks are simulated in real time. The first task is performed successfully when the human fingers move slowly. For the second task, the two fingers can be controlled to move by those of the operator respectively, but sometimes the fingertip positions of the thumb fall outside its workspace. Some important problems therefore remain to be solved: the image processing algorithm needs to be improved to increase the identification correctness and the grasp speed, and the mapping method also needs improvement.
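Read as code, the flow of figure 8 amounts to the per-frame loop sketched below. This is only a sketch: the identification stub returns canned data in place of the vision pipeline of section 3, a single factor stands in for the per-finger $k_i$, and the transmit function prints instead of driving the virtual fingers.

```cpp
#include <cstddef>
#include <iostream>
#include <optional>
#include <vector>

struct Vec3 { double x, y, z; };

// Stub for sections 3.1-3.3: acquire one stereo frame, identify the markers
// in both images and triangulate. Returns nothing when identification fails
// in either image; here it just returns canned data.
std::optional<std::vector<Vec3>> locateFingertips() {
    return std::vector<Vec3>{{10.0, 5.0, 300.0}};
}

// Stub for the virtual hand: transmit the target positions P_i to the fingers.
void transmitToVirtualFingers(const std::vector<Vec3>& P) {
    for (std::size_t i = 0; i < P.size(); ++i)
        std::cout << "P" << i << " = (" << P[i].x << ", "
                  << P[i].y << ", " << P[i].z << ")\n";
}

int main() {
    const double k = 1.1;          // single stand-in for the per-finger k_i
    std::vector<Vec3> prev, P;     // previous human positions, robot targets
    bool initiated = false;

    for (int frame = 0; frame < 3; ++frame) {     // main loop of figure 8
        auto cur = locateFingertips();            // open images, search markers,
        if (!cur) continue;                       // centers, world coordinates

        if (!initiated) {                         // first valid frame: initiate
            prev = *cur;
            P = *cur;                             // assumed initial robot pose
            initiated = true;
            continue;
        }
        for (std::size_t i = 0; i < P.size(); ++i) {   // P_i += k * delta r_i^H
            P[i].x += k * ((*cur)[i].x - prev[i].x);
            P[i].y += k * ((*cur)[i].y - prev[i].y);
            P[i].z += k * ((*cur)[i].z - prev[i].z);
        }
        transmitToVirtualFingers(P);
        prev = *cur;
    }
}
```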

Figure 9. Pressing the button

Figure 10. Moving the thumb and index fingers

6 Conclusion

In the presented grasp planning system, the positions of the robot fingertips are determined by those of the human fingertips, which are measured by computer vision. The computer vision system proves capable of measuring the motion of the human fingers in real time, with higher precision than a dataglove. The integration of human intelligence and experience not only reduces the complexity of grasp planning, but also makes the system capable of adapting to unstructured and unknown environments. It also gives the robot hand the possibility of grasping arbitrarily shaped objects, releasing humans from dangerous, heavy and tedious work. However, some issues remain to be improved; in particular, a more reliable computer vision algorithm is needed to increase the rate of correct identification and to reach the normal speed of human hand movement. In addition, the vision system is expected to be combined with other sensors to make the grasp more efficient.

Acknowledgement

This project is supported by the National Natural Science Foundation of China (59985001) and the Doctoral Grant of the Education Ministry of China (2000000605).

References
[1] Sing Bing Kang and Katsushi Ikeuchi, Toward Automatic Robot Instruction from Perception: Mapping Human Grasps to Manipulator Grasps, IEEE Trans. on Robotics and Automation, 13(1), pp. 81-95, 1997.
[2] M. Fischer, P. van der Smagt, and G. Hirzinger, Learning Techniques in a Dataglove Based Telemanipulation System for the DLR Hand, Proc. 1998 IEEE Intl. Conf. on Robotics and Automation, pp. 1603-1608, Leuven, Belgium, 1998.
[3] Haruhisa Kawasaki, Kanji Nakayama, Tetsuya Mouri, and Satoshi Ito, Virtual Teaching Based on Hand Manipulability for Multi-Fingered Robots, Proc. 2001 IEEE Intl. Conf. on Robotics and Automation, pp. 1388-1393, Korea, 2001.
[4] Bruno M. Jau, Dexterous Telemanipulation with Four Fingered Hand System, Proc. 1995 IEEE Intl. Conf. on Robotics and Automation, pp. 338-343, 1995.
[5] Michael L. Turner, et al., Development and Testing of a Telemanipulation System with Arm and Hand Motion, Proc. of ASME IMECE DSC-Symposium on Haptic Interfaces, pp. 1-8, 2000.
[6] Danica Kragic, Andrew T. Miller and Peter K. Allen, Real-Time Tracking Meets Online Grasp Planning, Proc. 2001 IEEE Intl. Conf. on Robotics and Automation, pp. 2460-2465, Korea, 2001.
