Exploring Human Hand Capabilities into Embedded Multifingered Object Manipulation

Honghai Liu, Senior Member, IEEE

Abstract—This paper provides a comprehensive computational account of hand-centred research: the principles, methodologies and practical issues behind human hands, robot hands, rehabilitation hands, prosthetic hands and their applications. To help readers understand hand-centred research, the paper presents recent scientific findings and technologies, including human hand analysis and synthesis, hand motion capture, and recognition algorithms and their applications; it serves the purpose of showing how to transfer human hand manipulation skills to related hand-centred applications in a computational context. The concluding discussion assesses the progress thus far and outlines some research challenges and future directions, the solution of which is essential to achieving the goals of human hand manipulation skill transfer. It is expected that the survey will also provide profound insights into real-time hand-centred algorithms, human perception-action, and potential hand-centred healthcare solutions.

Index Terms—Motion recognition, human hand modeling, motion capturing, multifingered robot manipulation, cognitive robotics.

H. Liu is with the Intelligent Systems and Biomedical Robotics Group, School of Creative Technologies, University of Portsmouth, England, PO1 2DJ UK. Email: honghai.liu@port.ac.uk
The author would like to acknowledge the project under grant No. EP/G041377/1 funded by the Engineering and Physical Sciences Research Council, and grant No. IJP08/R2 by the Royal Society.
Copyright (c) 2009 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieee.org.

I. INTRODUCTION

It is evident that the dexterity and multi-purpose manipulation capability of the human hand inspires cross-disciplinary research and applications in robotics and artificial intelligence, driven by the dream of developing an artificial hand with the human hand's properties [1]. However, five decades on, it is clear that priority in realising this dream has been given to computational hand models [2], [3]. The primary challenge is how to transfer human hand capabilities into multifingered object manipulation, especially in a real-time context; this challenge underlies advanced artificial-intelligence robotics and its related disciplines and applications.

Recent innovations in motor technology and robotics have achieved impressive results in robotic hand hardware, such as the Southampton Hand, DLR hands, Robonaut hand, Barrett hand, DEKA hand, Emolution hand, Shadow hand and iLimb hand [4]. In particular, the ACT hand [5] has not only the same kinematics but also a similar anatomical structure to the human hand, providing a good starting point for a new generation of anatomical robotic hands. However, an anatomically correct robotic hand is still a long way off, due to the lack of appropriate sensory systems, unsolved human-robot interaction problems, mysterious neuroscience issues, etc. In recent decades, owing to significant innovations in multifingered robot hands and mature algorithms in robot planning, priority has been given to multifingered robot object manipulation. As the hardware of multifingered robotic systems has developed, there have been parallel advances in the three most important engineering challenges for the robotics community, namely the optimal manipulation synthesis problem, the real-time grasping force optimisation problem, and coordinated manipulation with finger gaiting [6], [7]. Stable multifingered robot manipulation is determined by engineering criteria, that is, force closure. Computer scientists have made significant advances in computational intelligence for robot manipulation; Gomez et al. [8], for example, developed an adaptive learning mechanism which allows a tendon-driven robotic hand to explore its own movement possibilities, interact with objects of different shapes, sizes and materials, and learn how to grasp and manipulate them. The state of the art in artificial hand hardware confirms that existing platforms are capable of accommodating advanced computational models for adapting human hand manipulation skills, though operating them in a real-time context is still a challenge [9]–[11].

However, the manipulation systems of robotic hands are hardcoded to handle specific objects with their corresponding robot hands. Robot hand control and optimisation problems are very difficult to resolve in mathematical terms, yet humans solve their hand manipulation tasks easily using skills and experience. Object manipulation algorithms are therefore needed that have human-like manipulation capabilities, are independent of robot hand hardware, and run in real time. Hence, the main challenge that researchers now face is how to enable robot hands to use what can be learned from human hands to manipulate objects with the same degree of skill and delicacy as human hands, in a reasonably fast manner. For instance, given the location and shape of a cup from off-the-shelf image processing algorithms, a robotic hand is required, inspired by human hand biological capabilities, to reach and manipulate the cup by continuously responding with an appropriate shape configuration and force distribution among the fingers and palm.

Transferring human manipulation skills to artificial hands involves modeling and understanding human hand motion capabilities, together with advanced multifingered manipulation planning and control systems, manipulation abilities, and the sensory perception and motion algorithms of artificial hand systems. It connects user manipulation commands/intentions to the
motion of a highly articulated and constrained human hand, containing 27 bones, giving it roughly 27 degrees of freedom, or ways of moving. Chella et al. [12] confirmed that prior knowledge should be introduced in order to achieve fast and reusable learning of behavioural features, and that it should be integrated into the overall knowledge base of a system; this serves the purpose of fulfilling requirements such as reusability, scalability, explainability and software architecture. Carrozza et al. [13] designed artificial hand systems for dexterous manipulation; they showed that human-like functionality can be achieved even if the structure of the system is not completely biologically inspired. Learning from human hand motion is preferred for human-robot skill transfer in that, unlike teleoperation-related methods, it provides non-contact skill transfer from human motions to robot motions through a paradigm that can endow artefacts with the ability for skill growth and life-long adaptation without detailed programming [14]. In principle, not only does this provide a natural, user-friendly means of implicitly programming the robot, but it also makes the learning problem significantly more tractable by separating redundancies from the important characteristics of a task.

This paper presents a survey of the most recent work on transferring human hand skills to artificial hand systems; the detailed modules are illustrated in Fig. 1, namely hand analysis and synthesis, hand motion capture, hand skill transfer and hand-centred applications. Though a variety of problems of multifingered artificial hand systems have been addressed, it is evident that research communities and practitioners require a unified account of recent progress in exploring human hand capabilities for multifingered robot manipulation, with an emphasis on computational models of hand motion recognition. Not only does this paper provide a comprehensive computational account of hand-centred research, but it also assesses the progress thus far and outlines some research challenges and future directions, the solution of which is essential to achieving the goals of human hand manipulation skill transfer. Computational issues related to neuroscience, health sciences and developmental robotics are not addressed, in order to focus on the computational aspects of transferring hand manipulation skills to artificial hands. The remainder of this paper is organized as follows: Section II presents human hand anatomy and summarizes hand motion capturing devices; it is also devoted to hand gesture capturing based on gloves, vision and electromyography (EMG). Section III discusses various recognition methods, with particular emphasis on Hidden Markov Models (HMM), Finite State Machines (FSM), and connectionist approaches. Section IV overviews hand-centred applications, and the last section concludes this paper and indicates some existing challenges and future research possibilities.

Fig. 1. A schematic of exploring human hand capabilities into multifingered artificial manipulation

II. THE HUMAN HAND MODELING

Every day human hands perform a huge number of dexterous grasps to fetch, move and use different tools instinctively, thanks to an innate sense of goal attainment and sensorimotor control. However, everyday tasks in human environments remain relatively difficult for multifingered artificial hands, mainly due to the lack of appropriate sensor systems and unsolved problems involving human-robot interaction (HRI), neuroscience, etc. Though artificial hands may perform stronger and faster grasps than the human hand, the high dimensionality makes it hard to program, in an embedded manner, a human-like robotic hand to manipulate objects with dexterous grasps as humans do.

A. The Human Hand

The human hand has complex kinematics and a highly articulated mechanism with 27 degrees of freedom (DOF): 4 in each finger, i.e., 3 for extension and flexion and 1 for abduction and adduction; the thumb is more complicated and has 5 DOF; and 6 DOF for the rotation and translation of the wrist. Natural anatomical restrictions, subject to the muscle-tendon controlling mechanism, give the human hand its capability of super dexterity and powerful usage of a wide range of tools. We perform the 1000 or so different functions of daily life with the 19 bones in each hand, as shown in Fig. 2. The wrist itself contains eight small bones called carpals. The carpals join with the two forearm bones, the radius and ulna, forming the wrist joint. Further into the palm, the carpals connect to the metacarpals. There are five metacarpals forming the palm of the hand. One metacarpal connects to each finger and thumb. Small bone shafts called phalanges line up to form each finger and thumb. The main knuckle joints are formed by the connections of the phalanges to the metacarpals. These joints are called the metacarpophalangeal joints (MCP joints). The MCP joints work like a hinge when you bend and straighten your fingers and thumb. The three phalanges in each finger are separated by two joints, called interphalangeal joints (IP joints). The one closest to the MCP joint (knuckle) is called the proximal IP joint (PIP joint). The joint near the end of the finger is called the distal IP joint (DIP joint). The joints
Fig. 2. Anatomical structure of the human hand

of the hand, fingers and thumb are covered on the ends with articular cartilage, which absorbs shock and provides an extremely smooth surface to facilitate motion.

The human hand is a most profound system, capable of performing more complicated, dexterous tasks than any existing artificial system. In terms of artificial-intelligence robotics, all existing artificial hands can only roughly mimic what the human hand does, far from reflecting its inherent motion capabilities, let alone providing an in-depth understanding of its connection to the human brain, though there is substantially growing interest in related research in neuroscience, health science and cognitive robotics [15].

B. Hand Motion Capturing

Sensory information such as hand position, angle, force and related parameters must be available to build up a computational model of the human hand and, further, to transfer the model to hand-centred applications. The complexity and dexterity of the human hand, and its involvement in handling various tools, make it challenging to handle the trade-off between acquiring sufficient data to represent hand manipulation capabilities and the precision of hand computational models.

1) Hand Gesture Capturing: Hand gestures are usually represented by combinations of parameters captured by glove-based systems, marker-based capturing systems and conventional clinical measurement tools. Vision-based hand capturing systems are classified as an individual group, partially due to their wide usage and promising future applications. A comprehensive review intensively summarizes the historical development of glove-based systems and their applications [2]; though there have been very few innovations in dataglove systems, an update focusing on hand-capturing-oriented glove systems is provided in [16] and Table I. Marker-based capturing systems are widely developed and employed for hand motion capturing in order to compensate for the drawbacks of glove-based systems. For example, surface markers are used to capture wrist, metacarpal arch, finger and thumb movements, as shown in Fig. 3. Note, however, that capturing systems in this category are inconvenient to use, since they must cling to the hands at all times and require time-consuming calibration.

Fig. 3. Hand gesture capturing based on surface markers [17]

2) Vision Based Capturing: A comprehensive review of computer vision-based human motion capture covers related capturing devices and techniques before 2001 [18], though its overview is focused on a taxonomy of system functionalities broken down into initialisation, tracking, pose estimation and recognition. The priority of human motion capturing, especially hand motion capturing, has moved to devices based on multiple cameras, 3-D cameras and innovative devices such as the Kinect [19]. It is evident that the latter two types of capturing methods suffer from low precision. Multiple cameras and depth cameras have been arranged for developing motion recognition algorithms subject to individual applications; publicly available multi-camera databases are summarized in [19]. For instance, the HumanEva-I dataset contains 7 calibrated video sequences that are synchronized with 3D body poses obtained from a motion capture system. Additionally, the Kinect's depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor; it captures video data in 3D under a variety of ambient light conditions. Since the human hand is highly articulated and constrained, vision-based capturing devices are used to achieve both 2D and 3D hand gesture models.

3) Haptics Capturing: Haptics, or touch feedback, refers in this paper to providing touch feedback to human hands or artificial hands. The data glove and camera can obtain sufficient information about human hand joint angles or positions, but they are incapable of capturing the forces of human manipulation. Tactile sensors are employed to collect data and perform tactile discrimination, identifying differences in the texture, temperature and shape of objects through touch alone [20]. Usually, haptic sensory information can be captured either by the use of force displays such as the PHANToM to represent actual friction force, or by changing the friction properties of the contact surface. Of the variety of tactile sensors, very few can be applied to provide accurate hand motion; the FingerTPS system is widely adopted with positive feedback, as shown in Fig. 4. Each FingerTPS system supports two hands with up to 6 sensors per hand; it includes the sensors, a wrist-mounted interconnect harness, a rechargeable wireless interface module, and a USB Bluetooth transceiver. In addition, Tekscan Inc. provides a wide range of FlexiForce sensors, which can be tailored into different shapes subject to individual applications.
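Before recognition, a raw fingertip force stream of the kind produced by such tactile sensors is typically segmented into contact episodes. A minimal sketch, assuming a sampled force trace in newtons and hand-picked hysteresis thresholds (the function name and threshold values are illustrative, not taken from any product above):

```python
def contact_episodes(forces, on_threshold=0.5, off_threshold=0.2):
    """Return (start, end) sample indices of contact episodes.

    Hysteresis: a contact begins when force rises above on_threshold
    and ends only when it falls below off_threshold, which suppresses
    chatter that a single threshold would produce near the boundary.
    """
    episodes, start, in_contact = [], None, False
    for i, f in enumerate(forces):
        if not in_contact and f > on_threshold:
            in_contact, start = True, i
        elif in_contact and f < off_threshold:
            in_contact = False
            episodes.append((start, i))
    if in_contact:  # trace ends while still in contact
        episodes.append((start, len(forces)))
    return episodes

trace = [0.0, 0.1, 0.6, 0.9, 0.4, 0.3, 0.1, 0.0, 0.7, 0.8]
print(contact_episodes(trace))  # → [(2, 6), (8, 10)]
```

Note that the sample at index 4 (0.4 N) does not end the first episode, even though it is below the on-threshold; that is exactly the chatter the hysteresis is there to absorb.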
TABLE I
ADVANCED DATA GLOVE PRODUCTS SELECTED TO HIGHLIGHT THE STATE OF THE ART

Device           Technology                                           Sensors and locations                                                   Precision   Speed
DG5-VHand        accelerometer and piezo-resistive                    6 (a 3-axis accelerometer in the wrist; one bend sensor per finger)     10 bit      25 Hz
5DT Glove 14     fiber optic sensors                                  14 (2 sensors per finger and abduction between fingers)                 8 bit       minimum 75 Hz
X-IST Glove      piezo-resistive, pressure sensor and accelerometer   14 (sensor selection: 4 bend sensor lengths, 2 pressure sensor sizes, 1 two-axis accelerometer)   10 bit   60 Hz
CyberGlove II    piezo-resistive                                      22 (three flexion sensors per finger, four abduction sensors, a palm-arch sensor, and sensors to measure flexion and abduction)   8 bit   minimum 90 Hz
Humanglove       Hall-effect sensors                                  20/22 (three sensors per finger and abduction sensors between fingers)  0.4 deg     50 Hz
Shapehand glove  bend sensors                                         40 (flexions and adductions of wrist, fingers and thumb)                n.a.        maximum 100 Hz
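As a rough illustration of what the precision column above implies, the smallest distinguishable joint-angle step of a glove channel follows from its ADC bit depth and the joint's range of motion (the 90-degree flexion range below is an assumed value for illustration, not a manufacturer specification):

```python
def angular_resolution(bits: int, range_deg: float) -> float:
    """Smallest joint-angle step (degrees per count) for an ADC of the
    given bit depth sampling a sensor that spans range_deg."""
    return range_deg / (2 ** bits)

# Assumed 90-degree flexion range for a finger joint.
res_8bit = angular_resolution(8, 90.0)    # 8-bit channel, as in CyberGlove II-class devices
res_10bit = angular_resolution(10, 90.0)  # 10-bit channel, as in DG5-VHand-class devices
print(f"8-bit: {res_8bit:.3f} deg/count, 10-bit: {res_10bit:.3f} deg/count")
# → 8-bit: 0.352 deg/count, 10-bit: 0.088 deg/count
```

On this assumption, an 8-bit channel quantises a 90-degree range into steps of about 0.35 degrees, which is comparable to the 0.4-degree precision quoted for the Hall-effect Humanglove.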

Fig. 4. Finger tactile pressure sensor

4) EMG Signal Capturing: Electromyography (EMG) is a technique for testing, evaluating and recording the activation signals of muscles, comprising non-invasively captured signals, i.e., sEMG, and invasively captured signals, i.e., iEMG, which can be used to indirectly estimate the manipulation force the human hand applies. sEMG signals have mainly been used for interacting with machines, owing to their representation of global muscle activity. For instance, hand gestures are captured using sEMG sensors in order to evaluate and record the physiological properties of muscles at rest and while contracting [21]. On the other hand, with recent innovations in reliable and implantable electrodes, iEMG has gained more interest for myoelectric control. Additionally, Delsys Inc. provides commercial surface EMG sensors which deliver more reliable signals than conventional surface EMG sensors, as shown in Fig. 5. It is worth noting that there are inherent difficulties in deriving a general model of the relationship between the recorded EMG and muscle output force in humans performing static and dynamic contractions.

Fig. 5. Surface EMG sensors [22]

III. HAND SKILL TRANSFER

Generally speaking, hand skill transfer is the process of learning a new skill using aspects of other mastered hand-centred manipulation skills. In the context of this paper, hand skill transfer is defined as the act of learning/retrieving a skill for object manipulation by artificial hands based on the knowledge/demonstrations learned from human hand motion capabilities. The skill transfer can be considered a three-stage process: a) constructing a knowledge base, consisting of human hand motion primitives and manipulation scenarios, from human hand manipulation demonstrations or a dataset captured from hand movements; b) given a specific hand manipulation scenario, applying human hand skill to artificial hands; c) providing a software or hardware solution to bridge the gap between the constructed knowledge base and the adaptation of retrieved hand skills to artificial hands [23]. Thanks to engineering robotics, adapting motion trajectories to artificial hands is relatively mature [24]. It is evident that bridging the gap of transferring hand skills to artificial hands is a long-standing problem; hardcoded methods are probably the only solution, on an ad hoc basis, in practice. The problem has an inherent connection with symbol grounding in a psychological context. Priority in this section is given to the core technique of how to construct the motion knowledge base, i.e., how to recognize hand motion.

A. Hand Motion Recognition

It is evident that recognizing hand motion is challenging, mainly due to its coupled temporal and spatial characteristics [25]. Human hand motion can be categorized into static gestures, dynamic gestures and in-hand manipulation. The former two refer to hand poses and continuous hand motion. The latter involves manipulating an object within
Fig. 6. Grasp recognition with a five-state, four-observation HMM [26]

one hand, in which the fingers and thumb are used to best position the object for the activity. Due to the difficulty of in-hand manipulation recognition, very little research has been reported; it is even more challenging to recognize in-hand motion in a real-time context. A set of algorithms has been developed for this purpose and achieved it on a suboptimal real-time basis [26]. It is evident that the difficulties confronting hand motion recognition can be addressed efficiently by fusing multiple information sources, such as tactile sensing and vision.

Hand gesture recognition, in general, draws on probabilistic graphical models, neural networks, finite state automata, rule-based reasoning and ad hoc methods. Probabilistic graphical models such as hidden Markov models (HMMs) have demonstrated high potential in hand gesture recognition; an example is given in Fig. 6. A survey of general gesture recognition can be found in [25]. In order to provide readers with a comparative overview of hand gesture recognition for the purpose of hand skill transfer, motion recognition algorithms are organised in this paper in terms of capture devices, namely marker/vision-based sensors, tactile sensors and EMG-based capture devices. Note that recognition algorithms mimic, in principle, the process by which humans recognize hand gestures: hand gestures are naturally recognized first by segmenting the gesture sequences in terms of the start and end spatial-temporal position points of a hand gesture sequence; then humans compare the specified partitions with the 'experience' in their minds and determine the grasp types. In line with the state of the art in hand motion recognition, this paper focuses on the recognition of static and dynamic gestures; in theory, the algorithms should work for in-hand manipulation recognition as well, though with a rather poor recognition rate due to the complexity of in-hand motion.

B. Marker based Gesture Recognition

Marker based capture devices usually consist of glove-based systems and capture systems such as Vicon motion capture. The corresponding recognition methods can be categorized into model-based methods and model-free methods. Model-based methods can be further decomposed into those with and without hand motion constraints.

Model-free approaches such as neural networks, rule-based methods, etc. have been effective in dealing with hand dexterity and the uncertainty caused by a wide range of factors, such as high-dimensional raw sensory data. The Glove-Talk project first successfully demonstrated a neural network interface mapping hand gestures to sign language [27]; hand gesture recognition was achieved with a recognition rate of 90.5% based on a Self-Growing and Self-Organized Neural Gas network [28]; the support vector machine was also introduced to recognize hand gestures with competitive results; and a comparison of classification methods for grasp recognition was intensively studied in [29], which introduced a systematic approach for evaluating classification techniques for recognizing grasps performed with a data glove. In particular, it distinguishes between 6 settings that make different assumptions about the user groups and objects to be grasped. On the other hand, fuzzy systems have been outstandingly successful in recent years. For instance, Rainer Palm et al. [30] employed the CyberGlove to record human grasp data in order to capture grasp primitives, modeled the hand trajectories of grasp primitives by T-S fuzzy modelling, and partially implemented the research results in prosthetic hands.

Model-based methods are defined by matching the true geometry of the human hand; a trade-off has to be handled properly between hand models and hand motion constraints in order to achieve a suitable combination of recognition accuracy and algorithm efficiency. For instance, Sudderth et al. introduced a kinematic model in which nodes correspond to rigid bodies and edges to joints, in comparison with a deformable point-distribution hand model [31]. It is accepted that a whole-hand kinematic structure cannot be represented without constraints; the primary advantage of considering the motion constraints among fingers and finger joints is to greatly reduce the size or dimensionality of the hand gesture search space. Motion constraints have been investigated for different reductions of hand DoFs and for learning from a large and representative set of training samples [32]: a) limits of finger motions as static constraints; b) limits of dynamic constraints imposed on joints during motion, and limits in performing natural motion.

Furthermore, Chua et al. reduced the 27 DoFs of the human hand model to 12 DoFs by analyzing the hand constraints in terms of eight different types; these eight constraints could be defined as "weak constraints" in both static and dynamic analysis [33]. One of the drawbacks is that these constraints could lead to invalid solutions. In addition, a graph-based approach was presented to understand the meaning of hand gestures by associating dynamic hand gestures with known concepts and relevant knowledge. Hand gestures are understood by comparing concepts represented by sets of hand-gesture elements with existing knowledge in the form of conceptual graphs, combining graph morphisms and projection techniques [34].

It is evident that marker based methods offer much faster and more precise recognition than vision based motion recognition. It would also be interesting to see whether motion constraints could assist model-free approaches to achieve better performance in both gesture recognition accuracy and efficiency. Misusing hand motion constraints could lead to rather poor performance, since a large number of hand motion constraints are difficult and computationally expensive to represent in closed form.
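HMM-based grasp recognition of the kind illustrated in Fig. 6 classifies an observation sequence by evaluating its likelihood under one trained model per grasp type and picking the best. A minimal sketch of that evaluation step using the scaled forward algorithm (the grasp labels, model sizes and all probability values below are illustrative stand-ins, not trained parameters):

```python
import math

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM.

    pi[i]  : initial probability of state i
    A[i][j]: transition probability from state i to state j
    B[i][k]: probability of emitting symbol k in state i
    """
    # Initialise with the first observation.
    alpha = [pi[i] * B[i][obs[0]] for i in range(len(pi))]
    log_p = 0.0
    for o in obs[1:]:
        # Rescale at each step to avoid underflow on long sequences.
        s = sum(alpha)
        log_p += math.log(s)
        alpha = [a / s for a in alpha]
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(A))) * B[j][o]
                 for j in range(len(A))]
    return log_p + math.log(sum(alpha))

# Toy 2-state, 2-symbol models standing in for two grasp types.
models = {
    "power":     ([0.9, 0.1], [[0.8, 0.2], [0.1, 0.9]], [[0.9, 0.1], [0.2, 0.8]]),
    "precision": ([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], [[0.3, 0.7], [0.7, 0.3]]),
}
obs = [0, 0, 1, 1, 1]  # a quantised glove/marker feature sequence
best = max(models, key=lambda m: forward_log_likelihood(obs, *models[m]))
print(best)  # → power
```

In a real system, each grasp-type model would first be trained on segmented demonstration sequences (e.g., by Baum-Welch re-estimation), and the five-state left-right topology of Fig. 6 would replace the toy two-state models above.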
C. Vision based Motion Recognition

Hand recognition in this section is composed of two stages: hand motion tracking and recognition. Motion tracking separates hand motion from the contextual background; the recognition algorithms of Section III-B can, in theory, be directly applied to vision based hand motion recognition. Vision based recognition is organized here in terms of static models and dynamic models; the two types of methods overlap with the more generally accepted terms of generative and discriminative models.

Static methods are dominated by template matching and its variants. 2D, 3D and depth-involved static hand models are categorized in terms of multiple factors such as camera types, camera viewpoints, occlusion, etc. [35]. 2D hand models usually depend on extracted image parameters, which are derived from hand image properties including contours and edges, image moments, image eigenvectors and other properties. Eigenvalues indicating hand width and length were extracted in [36] to build a hand gesture recognition system for real-time American Sign Language in unconstrained environments, while Haar-like features have been extracted from 2D hand images to recognize hand gestures. Additionally, other features such as the 'bunch graph' in [37], 'conducting feature points' and a 'grid of neurons' have also been extracted from 2D image frames. Note that 2D hand models are robust to self-occlusion, since they extract no 3D features and directly compare the 2D image properties between the input images and the registered ones; images without shadowed fingers can further enhance the validity of 2D hand models [38]. 3D hand models may incur computational complexity due to inverse kinematics and 3D reconstruction; for instance, reduction of hand DoFs was employed to efficiently estimate 3D hand postures from a set of eight 2D projected feature points. On the other hand, static hand models have been intensively studied using multiple cameras [39]. Not only does employing multiple cameras overcome difficulties such as self/object occlusion, but it also provides a wider range of poses and achieves a more reliable 3D hand configuration. Combining multiple-view information usually refers to: a) grouping all the shape contexts from all the images together before clustering to build the histograms; b) estimating the pose from each view individually and combining the results at a high level using graphical models; or c) fusing the information in hybrid terms of the other two multi-view combinations. Additionally, infrared cameras are used to retrieve depth information in 3D hand reconstruction via techniques such as stereo triangulation, sheet-of-light triangulation, coded aperture, etc.

Dynamic hand models involving spatial-temporal features can be generated by a range of methods such as probabilistic graphical models, finite state automata, etc. [40]; an example is given in Fig. 7. It is practically feasible to adapt static hand models into dynamic models by introducing temporal parameters into hand modelling. Probabilistic graphical models, e.g., HMMs, and finite state automata are the dominant methods for dynamic models. HMMs were proposed to model dynamic processes in nature. Selected HMM-based research is reported as follows [19]: a recognition strategy was proposed by combining static shape recognition, Kalman filter based hand tracking and an HMM based temporal characterization scheme; and a pseudo-3D Hidden Markov Model was introduced to recognize hand motion, which gained a higher recognition rate than a 2D HMM [41]. Note that HMMs are limited in handling three or more independent processes efficiently. To alleviate this problem, researchers have generalized HMMs into dynamic Bayesian networks. Dynamic Bayesian networks are directed graphical models of a stochastic process, representing hidden and observed states with complex interdependencies, which can be efficiently captured by the structure of the directed graphical models. On the other hand, a finite state automaton is a model of behavior composed of a finite number of states, transitions between those states, and associated actions. A finite state automaton can be represented by a state transition diagram, in which the continuous stream of sensory data of a gesture is represented as a sequence of states. For example, Yeasin et al. employed a finite state automaton to automatically interpret a gesture accurately while avoiding the computationally intensive task of image sequence warping [42]. Note that using only vision sensors, such as conventional cameras, usually leads to intensive computation; marker-based systems enhance the precision and efficiency of gesture recognition.

Fig. 7. Graphical model based hand gesture recognition [43]

D. Haptics based Motion Recognition

The human hand utilizes distributed receptors, both on the surface and inside of the fingers, where the stimuli change with finger movement, to acquire rich information about the manipulated object. Haptic devices have been introduced to capture such information during dynamic interaction, which can also be used as a unique/additional motion feature for discriminating hand motions [44]. Haptic devices have also been integrated with data gloves to collect multiform information, as in the haptic data glove and SlipGlove [45]. These gloves with haptic I/O capability provide the motion information of human hands and enhance the capturing of human hand skills. Kondo et al. used contact state transitions to recognize complex hand motions captured by a CyberGlove and a tactile sensor sheet, the Nitta BIG-MAT quarter, attached to the lateral side of a cylindrical object [46]. Kawasaki et al. employed hand motions consisting of contact points, grasp force, and hand and object positions to explore the maximized manipulability of a robotic hand by scaling the virtual hand model [44].

On the other hand, the contact interaction between the finger tip and the object surface associates different manipulated objects with different hand motions. Some researchers have studied object recognition with complicated static/dynamic
tactile pattern obtained from multi-contact, finally to differentiate the manipulations [47]. A dynamic haptic pattern refers to the time series of haptic sensing changes and is capable of providing more haptic information than a static haptic pattern during the interactions between the hand and objects. Watanabe et al. utilized tactile spatiotemporal differential information with a soft tactile sensor array attached to a robot hand to classify object shapes [48], while Hosoda et al. described the learning of robust haptic recognition by a bionic hand using a regrasping strategy and a neural network, through dynamic interaction with the objects [49].

One of the challenges remaining in haptics-based recognition, however, is how to model hand friction and its effects. Existing approaches are based on point-contact friction models, either the point friction model or the cone friction model, or compensate for friction via a control strategy. Both models are idealistic for practical robot manipulation in that the contact between an object and the corresponding robot hand is a surface instead of a point. Force-distribution sensors have been introduced to capture the magnitude of the force applied on a surface, but most of them are incapable of capturing the force directions [50]. The area and the complexity of the hand interaction space are limited due to the small size and resolution of haptic sensors. Additionally, the locality of the haptic sensor is another challenge, which depends heavily on the contact conditions [51]. Although compliant joints and soft finger tips [49] have been proposed to simulate the hand's soft tissues, it is still an open problem to imitate the nonlinearly viscoelastic property possessed by the finger's soft tissues, including the inner skin and subcutaneous tissue.

E. EMG based Hand Motion Recognition

There are two types of electromyogram: intramuscular EMG, nEMG, and surface EMG, sEMG. The former involves inserting a needle electrode through the skin into the muscle whose electrical activity is to be measured; the latter refers to placing electrodes on the skin overlying the muscle to detect its electrical activity. nEMG is predominantly used to evaluate motor unit function in clinical neurophysiology; it can provide focal recordings from deep muscles and independent signals relatively free of crosstalk. Due to the improvement of reliable and implantable electrodes, the use of nEMG for human hand movement studies has been explored more widely. Farrell et al. [52] showed that intramuscular electrodes have the same performance as surface electrodes in pattern classification accuracy for prosthesis control. Kamavuako et al. [53] demonstrated that a selective iEMG recording is representative of the applied grasping force and can potentially be suitable for proportional control of prosthetic devices.

sEMG signals have been used as a dominant method of interaction with machines [54]. In an EMG-based interaction system, hand gestures are captured using sEMG sensors which evaluate and record physiological properties of muscles at rest and while contracting [21]. Various classification methodologies have been proposed for processing and discriminating sEMG signals for hand motions. As a computation technique that evolved from mathematical models of neurons and systems of neurons, the neural network has become one of the most useful methods. Other neural-network-based classification algorithms include the log-linearized Gaussian mixture network, probabilistic neural network, fuzzy min-max neural network and radial basis function artificial neural network [55]. Statistical classifiers such as the HMM, Gaussian mixture model and support vector machine have also been used intensively in sEMG recognition. A few studies have compared several different methods [56]; e.g., Castellini et al. [57] reported that the support vector machine achieved higher recognition accuracy in comparison with neural networks and locally weighted projection regression, while Liu proposed the Cascaded Kernel Learning Machine, which has been compared to other classifiers such as the k-nearest neighbour, multilayer neural network and support vector machine [58]. However, none of these studies has explained why the performance is enhanced. In addition, there is a lack of consideration of sEMG's uncertainties, such as its non-stationary nature, muscle wasting, electrode position, different subjects and temperature impact. Muscle wasting or muscle fatigue can be considered as a decrease in the force-generating capacity of a muscle and has been evidenced in numerous studies. For the same hand motion, muscle fatigue results in a different sEMG signal, which may cause a failure of the recognition method. Electrode position is also critical for a valid sEMG signal and leads to estimates of sEMG variables that differ from those obtained at other nearby locations [59]. Temperature has additionally been proved to have an important effect on nerve conduction velocities and muscle actions [60]. These uncertainties need more consideration in extracting sEMG features, which are determinant to the performance of classifiers.

IV. HAND-CENTRED APPLICATIONS

In recent decades, with the developments and innovations in motor technology and robotics, exciting results have been seen in the design of physical artificial hands aiming to improve the flexibility of robotized systems. Artificial hands can be generally categorized into mechanical grippers, robotic hands, rehabilitation hands and prosthetic hands. Mechanical grippers, which usually differ from the mechanism of the human hand, have been widely used in industrial applications for fast and effective grasping and handling of a limited set of known objects [61], [62]. They are usually designed for a specific task, executing a preprogrammed motion trajectory and featuring low anthropomorphism and low manipulation capability [63]. Robotic hands and prosthetic hands are anthropomorphically designed to mimic the performance of human hands, targeting to learn human hand skills with adaptation in dynamic unstructured environments and even to be competent for work of which humans are incapable. A survey on artificial hands focusing on manipulative dexterity, grasp robustness and human operability can be found in [64]; an attempt to summarize up-to-date applications of artificial hands is provided below.

Various anthropomorphic robotic hands have been developed or improved in the past ten years. Not only does the anthropomorphic design make the skill transfer from human hand
Fig. 8. Robotic hands: (a) DLR-HIT hand; (b) Robonaut Hand; (c) Shadow hand C6M.

to robotic hand easier, but it also intends to equip artificial hands with the natural heritage of adaptation to the human living environment. The newly developed or improved robotic hands have more haptic sensors/feedback, such as the DLR-HIT and GIFU III hands with 6-axis force sensors and the Shadow hand with force and temperature sensors on the finger tips [65], [66]. They are relatively smaller and lighter but more powerful; e.g., a "Smart Motor" actuation system has been introduced to the Shadow hand C6M instead of the pneumatic air-muscle actuation system, integrating force and position control electronics, motor drive electronics, motor, gearbox, force sensing and communications into a compact unit. The appearance has also been improved; for example, the bionic hand in [49] has soft skin with distributed receptors. These robotic hands can be generally categorized into two types, dexterous hands and under-actuated hands, in terms of the number of actuators per finger. Dexterous hands have multiple actuators for each finger, which enables the robotic hand to control every degree of freedom with an individual actuator; for example, the DLR-HIT hand has four fingers with four joints and three actuators each, as shown in Fig. 8. However, the large number of actuators controlling the DoFs makes the automatic determination of their movement very difficult, and the high-dimensional search space for a valid joint path makes the computational cost extremely high [67]. Under-actuated hands have been developed as one solution to this problem, utilizing fewer actuators in one finger than its degrees of freedom. Such a mechanism is aimed at decreasing the number of active degrees of freedom by means of connected differential mechanisms in the system [68]. It is still a bottleneck problem how to achieve a proper mechanical design ensuring both hand effectiveness and efficiency.

Fig. 9. Prosthetic hands: (a) Myohand; (b) I-limb; (c) Bebionic Hand.

As one of the first application fields envisaged for artificial anthropomorphic hands, for obvious aesthetic as well as functional reasons [63], the prosthetic hand has gained intensive attention in the past decade. The available commercial prosthetic hands mainly include the Myohand from Otto Bock Ltd, the I-limb from Touch Bionics [69] and the Bebionic hand [70], shown in Fig. 9. They are usually controlled by sEMG signals extracted from the users' residual functional muscles. The users need a period of time to get used to the system, learning to generate recognizable muscle contractions. So far only a few simple but robust motions, such as opening and closing, have been applied in these systems, though the i-Limb can generate different hand gestures by different combinations of the active actuators. Changeable force can be applied to grasp objects of different weights using limited actuators and surface electrodes. Though the control is simple and robust, a large proportion of amputees do not use their prosthetic hands regularly, mainly due to weight, appearance/cosmetics and functionality [71]. In addition, there is no haptic or proprioceptive feedback to the subject in these prosthetic hands. Tasks are carried out automatically by a pre-programmed or pre-mapped desired trajectory triggered by the continuous results of the EMG classifier. The only sensory feedback is the user's direct vision, with which the user can stop or reset the task if it is not successful [72]. Besides the commercial prosthetic hands, significant examples can also be seen in research applications: the Cyberhand, the DEKA arm, the AR hand and so on. Since it is evident that force feedback systems have a wide variety of effects on users [73], force sensors have been widely applied to prosthetic hands by means of vibrotactile or electrotactile stimulation. For example, a force resistive sensor has been integrated into the index finger and the thumb of FLUIDHAND III respectively, whose signals are used to stimulate a vibration motor attached to the user's skin [74]. The user can sense the grasping force through the strength of the vibration.

It should be noted that applications of artificial hands in virtual reality and gaming are relatively mature in comparison with physical artificial hands, one of the main reasons being that animating virtual hands involves far fewer constraints, such as mechanical constraints. Vision-based methods have been proposed to estimate joint locations and creases on the palmar surface based on extracted features and analysis of surface anatomy [75]. It is expected that simulating natural hand motion and producing more realistic hand animation would be of great help to physical artificial hand manipulation and the synthesis of sign languages. Since the human hand is an important interface with complex shape and movement, it is also expected that the use of an individualized rather than generic hand representation can increase the sense of immersion and in some cases may lead to more effortless and accurate interaction with the virtual world.

V. CONCLUDING REMARKS

Programming multifingered artificial hands to solve complex manipulation tasks has remained an elusive goal. The complexity and unpredictability of the interactions of multiple effectors with objects is an important reason for this difficulty. The paper has reviewed the cycle of hand skill transfer from
analyzing the human hands to hand-centred applications. The primary challenge that researchers now confront is how to enable artificial hands to use what can be learned from human hands to manipulate objects with the same degree of skill and delicacy as human hands. The challenging issues in hand-centred research can be summarized as follows: a) there is a lack of a generalized framework which dynamically merges hybrid representations for hand motion description rather than employing hardcoded methods; for instance, how to autonomously interpret the semantic meanings of a hand motion; b) there does not exist a scheme which provides hand sensory feedback to artificial hands; for instance, a sensor or model is required which generates the necessary sensory feedback information for haptic perception understanding and view-invariant hand motion of artificial hands, e.g., a feasible slip model; c) existing algorithms fail to recognize and model human manipulation intention due to a variety of uncertainties, e.g., the quality of sensory information, individual manipulation habits and clinical issues; d) it is evident that reliable interfacing/implanting into the peripheral sensory nervous system and contextual information such as environmental models are missing for further bridging the gap between artificial hands and human/environment interaction; e) feasible embedded algorithms are crucial, in terms of sensory hand/context information fusion, to make artificial hands operational and functional in human environments [9], [76]. It is expected that this paper has provided a relatively unified feasibility account of the problems, challenges and technical specifications for hand-centred research, considering all the major disciplines involved, namely human hand analysis and synthesis, hand motion capture, hand skill transfer and hand-centred applications. This account of the state of the art has also provided partial insights into an in-depth understanding of human perception-action and potential healthcare solutions.

REFERENCES

[1] N. Palastanga, D. Field, and R. Soames, "Anatomy and Human Movement - Structure and Function, Fifth Edition," Elsevier Press, 2006.
[2] L. Dipietro, A. Sabatini, and P. Dario, "A Survey of Glove-Based Systems and Their Applications," IEEE Transactions on Systems, Man and Cybernetics, Part C, vol. 38, no. 4, pp. 461–482, 2008.
[3] B. Argall and A. Billard, "A Survey of Tactile Human-Robot Interactions," Robotics and Autonomous Systems, vol. 58, pp. 1159–1176, 2010.
[4] A. Muzumdar, "Powered upper limb prostheses," Springer, p. 208, 2004.
[5] Y. Matsuoka, P. Afshar, and M. Oh, "On the design of robotic hands for brain–machine interface," Neurosurgical Focus, vol. 20, no. 5, pp. 1–9, 2006.
[6] X. Zhu and H. Ding, "Computation of force-closure grasps: An iterative algorithm," IEEE Transactions on Robotics, vol. 22, no. 1, pp. 172–179, 2006.
[7] ——, "An efficient algorithm for grasp synthesis and fixture layout design in discrete domain," IEEE Transactions on Robotics, vol. 23, no. 1, pp. 157–163, 2007.
[8] G. Gomez, A. Hernandez, P. Eggenberger Hotz, and R. Pfeifer, "An adaptive learning mechanism for teaching a robot to grasp," Proc. International Symposium on Adaptive Motion of Animals and Machines, pp. 1–8, 2005.
[9] A. Malinowski and H. Yu, "Comparison of embedded system design for industrial application," IEEE Transactions on Industrial Informatics, vol. 7, no. 2, pp. 244–254, 2011.
[10] S. Fischmeister and P. Lam, "Time-aware instrumentation of embedded software," IEEE Transactions on Industrial Informatics, vol. 6, no. 4, pp. 652–663, 2010.
[11] A. Quagli, D. Fontanelli, D. Greco, L. Palopoli, and A. Bicchi, "Design of embedded controllers based on anytime computing," IEEE Transactions on Industrial Informatics, vol. 6, no. 4, pp. 492–502, 2010.
[12] A. Chella, H. Džindo, I. Infantino, and I. Macaluso, "A posture sequence learning system for an anthropomorphic robotic hand," Robotics and Autonomous Systems, vol. 47, no. 2-3, pp. 143–152, 2004.
[13] M. Carrozza, G. Cappiello, S. Micera, B. Edin, L. Beccai, and C. Cipriani, "Design of a cybernetic hand for perception and action," Biological Cybernetics, vol. 95, no. 6, pp. 629–644, 2006.
[14] S. Calinon, F. Guenter, and A. Billard, "On learning, representing, and generalizing a task in a humanoid robot," IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 37, no. 2, pp. 286–298, 2007.
[15] T. J. Bradberry, R. J. Gentili, and J. L. Contreras-Vidal, "Reconstructing Three-Dimensional Hand Movements from Non-Invasive Electroencephalographic Signals," Journal of Neuroscience, in press.
[16] R. Gentner and J. Classen, "Development and evaluation of a low-cost sensor glove for assessment of human finger movements in neurophysiological settings," Journal of Neuroscience Methods, vol. 178, no. 1, pp. 138–147, 2009.
[17] C. Metcalf, S. Notley, P. Chappell, J. Burridge, and V. Yule, "Validation and Application of a Computational Model for Wrist Movements Using Surface Markers," IEEE Transactions on Biomedical Engineering, vol. 55, no. 3, pp. 1199–1210, 2008.
[18] T. Moeslund and E. Granum, "A survey of computer vision-based human motion capture," Computer Vision and Image Understanding, vol. 81, no. 3, pp. 231–268, 2001.
[19] X. Ji and H. Liu, "Advances in view-invariant human motion analysis: A review," IEEE Transactions on Systems, Man and Cybernetics, Part C, vol. 40, no. 1, pp. 13–24, 2010.
[20] C. Metcalf and S. Notley, "Modified Kinematic Technique for Measuring Pathological Hyperextension and Hypermobility of the Interphalangeal Joints," IEEE Transactions on Biomedical Engineering, in press.
[21] C. Fleischer and G. Hommel, "Calibration of an EMG-Based Body Model with six Muscles to control a Leg Exoskeleton," Proc. International Conference on Robotics and Automation, pp. 2514–2519, 2007.
[22] "Delsys inc." http://www.delsys.com.
[23] H. Liu, "A Fuzzy Qualitative Framework for Connecting Robot Qualitative and Quantitative Representations," IEEE Transactions on Fuzzy Systems, vol. 16, no. 6, pp. 1522–1530, 2008.
[24] T. Yoshikawa, "Multifingered robot hands: Control for grasping and manipulation," Annual Reviews in Control, vol. 34, pp. 199–208, 2010.
[25] S. Mitra and T. Acharya, "Gesture Recognition: A Survey," IEEE Transactions on Systems, Man and Cybernetics, Part C, vol. 37, no. 3, pp. 311–324, 2007.
[26] Z. Ju and H. Liu, "A unified fuzzy framework for human hand motion recognition," IEEE Transactions on Fuzzy Systems, in press.
[27] S. Fels and G. Hinton, "Glove-TalkII - a neural-network interface which maps gestures to parallel formant speech synthesizer controls," IEEE Transactions on Neural Networks, vol. 9, no. 1, pp. 205–212, 1998.
[28] E. Stergiopoulou and N. Papamarkos, "Hand gesture recognition using a neural network shape fitting technique," Engineering Applications of Artificial Intelligence, vol. 22, pp. 1141–1158, 2009.
[29] G. Heumer, H. Amor, and B. Jung, "Grasp recognition for uncalibrated data gloves: A machine learning approach," PRESENCE: Teleoperators and Virtual Environments, vol. 17, no. 2, pp. 121–142, 2008.
[30] R. Palm, B. Iliev, and B. Kadmiry, "Recognition of human grasps by time-clustering and fuzzy modeling," Robotics and Autonomous Systems, vol. 57, no. 5, pp. 484–495, 2009.
[31] E. Sudderth, M. Mandel, W. Freeman, and A. Willsky, "Visual Hand Tracking Using Nonparametric Belief Propagation," Proc. International Conference on Computer Vision and Pattern Recognition, pp. 189–189, 2004.
[32] J. Lin and T. Wu, "Modeling the Constraints of Human Hand Motion," Urbana, vol. 51, no. 61, pp. 801–809.
[33] C. Chua, H. Guan, and Y. Ho, "Model-based 3D hand posture estimation from a single 2D image," Image and Vision Computing, vol. 20, no. 3, pp. 191–202, 2002.
[34] B. Miners, O. Basir, and M. Kamel, "Understanding hand gestures using approximate graph matching," IEEE Transactions on Systems, Man and Cybernetics, Part A, vol. 35, no. 2, pp. 239–248, 2005.
[35] C. Chan and H. Liu, "Fuzzy qualitative human motion analysis," IEEE Transactions on Fuzzy Systems, vol. 17, no. 4, pp. 851–862, 2009.
[36] N. Binh, E. Shuichi, and T. Ejima, "Real-Time Hand Tracking and Gesture Recognition System," Proc. International Conference on Graphics, Vision and Image, pp. 362–368, 2005.
[37] J. Triesch and C. von der Malsburg, "Classification of hand postures against complex backgrounds using elastic graph matching," Image and Vision Computing, vol. 20, no. 13-14, pp. 937–943, 2002.
[38] S. Ge, Y. Yang, and T. Lee, "Hand gesture recognition and tracking based on distributed locally linear embedding," Image and Vision Computing, vol. 26, no. 12, pp. 1607–1620, 2008.
[39] B. Stenger, A. Thayananthan, P. Torr, and R. Cipolla, "Estimating 3D hand pose using hierarchical multi-label classification," Image and Vision Computing, vol. 25, no. 12, pp. 1885–1894, 2007.
[40] A. Just and S. Marcel, "A comparative study of two state-of-the-art sequence processing techniques for hand gesture recognition," Computer Vision and Image Understanding, vol. 113, no. 4, pp. 532–543, 2009.
[41] N. Binh and T. Ejima, "Real-Time Hand Gesture Recognition Using Pseudo 3-D Hidden Markov Model," Proc. International Conference on Cognitive Informatics, vol. 2, pp. 1–8, 2006.
[42] M. Yeasin and S. Chaudhuri, "Visual understanding of dynamic hand gestures," Pattern Recognition, vol. 33, no. 11, pp. 1805–1817, 2000.
[43] J. Lin, "Visual hand tracking and gesture analysis," Ph.D. dissertation, University of Illinois at Urbana-Champaign, 2004.
[44] H. Kawasaki, T. Furukawa, S. Ueki, and T. Mouri, "Virtual Robot Teaching Based on Motion Analysis and Hand Manipulability for Multi-Fingered Robot," Journal of Advanced Mechanical Design, Systems, and Manufacturing, vol. 3, no. 1, pp. 1–12, 2009.
[45] Z. Wang, J. Yuan, and M. Buss, "Modelling of human haptic skill: A framework and preliminary results," Proc. 17th IFAC World Congress, pp. 1–8, 2008.
[46] M. Kondo, J. Ueda, and T. Ogasawara, "Recognition of in-hand manipulation using contact state transition for multifingered robot hand control," Robotics and Autonomous Systems, vol. 56, no. 1, pp. 66–81, 2008.
[47] N. Gorges, S. Navarro, D. Goger, and H. Worn, "Haptic object recognition using passive joints and haptic key features," Proc. IEEE International Conference on Robotics and Automation, pp. 2349–2355, 2010.
[48] K. Watanabe, K. Ohkubo, S. Ichikawa, and F. Hara, "Classification of Prism Object Shapes Utilizing Tactile Spatiotemporal Differential Information Obtained from Grasping by Single-Finger Robot Hand with Soft Tactile Sensor Array," Journal of Robotics and Mechatronics, vol. 19, no. 1, pp. 85–96, 2007.
[49] K. Hosoda and T. Iwase, "Robust haptic recognition by anthropomorphic bionic hand through dynamic interaction," Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1236–1241, 2010.
[50] K. Sato, K. Kamiyama, N. Kawakami, and S. Tachi, "Finger-Shaped GelForce: Sensor for Measuring Surface Traction Fields for Robotic Hand," IEEE Transactions on Haptics, vol. 3, no. 1, pp. 37–47, 2010.
[51] S. Takamuku, A. Fukuda, and K. Hosoda, "Repetitive grasping with anthropomorphic skin-covered hand enables robust haptic recognition," Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3212–3217, 2008.
[52] T. Farrell and R. Weir, "A comparison of the effects of electrode implantation and targeting on pattern classification accuracy for prosthesis control," IEEE Transactions on Biomedical Engineering, vol. 55, no. 9, pp. 2198–2211, 2008.
[53] E. N. Kamavuako, D. Farina, K. Yoshida, and W. Jensen, "Relationship between grasping force and features of single-channel intramuscular EMG signals," Journal of Neuroscience Methods, vol. 185, no. 1, pp. 143–150, 2009.
[54] K. Wheeler, M. Chang, and K. Knuth, "Gesture-based control and EMG decomposition," IEEE Transactions on Systems, Man and Cybernetics, Part C, vol. 36, no. 4, pp. 503–514, 2006.
[55] F. Mobasser and K. Hashtrudi-Zaad, "A method for online estimation of human arm dynamics," Proc. IEEE International Conference on Engineering in Medicine and Biology Society, vol. 1, pp. 2412–2416, 2006.
[56] S. Kawano, D. Okumura, H. Tamura, H. Tanaka, and K. Tanno, "Online learning method using support vector machine for surface-electromyogram recognition," Artificial Life and Robotics, vol. 13, no. 2, pp. 483–487, 2009.
[57] C. Castellini and P. van der Smagt, "Surface EMG in advanced hand prosthetics," Biological Cybernetics, vol. 100, no. 1, pp. 35–47, 2009.
[58] Y. Liu, H. Huang, and C. Weng, "Recognition of electromyographic signals using cascaded kernel learning machine," IEEE/ASME Transactions on Mechatronics, vol. 12, no. 3, pp. 253–264, 2007.
[59] L. Mesin, R. Merletti, and A. Rainoldi, "Surface EMG: The issue of electrode location," Journal of Electromyography and Kinesiology, vol. 19, no. 5, pp. 719–726, 2009.
[60] J. Feinberg, "EMG: Myths and Facts," HSS Journal, vol. 2, no. 1, pp. 19–21, 2006.
[61] X. Zhu, H. Ding, and M. Wang, "A numerical test for the closure properties of 3-D grasps," IEEE Transactions on Robotics and Automation, vol. 20, no. 3, pp. 543–549, 2004.
[62] X. Zhu and J. Wang, "Synthesis of force-closure grasps on 3-D objects based on the Q distance," IEEE Transactions on Robotics and Automation, vol. 19, no. 4, pp. 669–679, 2003.
[63] L. Zollo, S. Roccella, E. Guglielmelli, M. Carrozza, and P. Dario, "Biomechatronic design and control of an anthropomorphic artificial hand for prosthetic and robotic applications," IEEE/ASME Transactions on Mechatronics, vol. 12, no. 4, pp. 418–429, 2007.
[64] A. Bicchi, "Hands for dexterous manipulation and robust grasping: A difficult road toward simplicity," IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 652–662, 2000.
[65] H. Liu, P. Meusel, N. Seitz, B. Willberg, G. Hirzinger, M. Jin, Y. Liu, R. Wei, and Z. Xie, "The modular multisensory DLR-HIT-Hand," Mechanism and Machine Theory, vol. 42, no. 5, pp. 612–625, 2007.
[66] H. Kawasaki, T. Komatsu, and K. Uchiyama, "Dexterous anthropomorphic robot hand with distributed tactile sensor: Gifu hand II," IEEE/ASME Transactions on Mechatronics, vol. 7, no. 3, pp. 296–303, 2002.
[67] S. Sun, C. Rosales, and R. Suarez, "Study of coordinated motions of the human hand for robotic applications," Proc. IEEE International Conference on Information and Automation, pp. 776–781, 2010.
[68] T. Laliberte and C. Gosselin, "Simulation and design of underactuated mechanical hands," Mechanism and Machine Theory, vol. 33, no. 1-2, pp. 39–57, 1998.
[69] C. Connolly, "Prosthetic hands from Touch Bionics," Industrial Robot: An International Journal, vol. 35, no. 4, pp. 290–293, 2008.
[70] Bebionic, "Bebionic hand," http://www.bebionic.com/, 2011.
[71] P. Kyberd, C. Wartenberg, L. Sandsjo, S. Jonsson, D. Gow, J. Frid, C. Almstrom, and L. Sperling, "Survey of upper-extremity prosthesis users in Sweden and the United Kingdom," Journal of Prosthetics and Orthotics, vol. 19, no. 2, pp. 55–62, 2007.
[72] C. Cipriani, F. Zaccone, S. Micera, and M. Carrozza, "On the shared control of an EMG-controlled prosthetic hand: Analysis of user–prosthesis interaction," IEEE Transactions on Robotics, vol. 24, no. 1, pp. 170–184, 2008.
[73] C. Pylatiuk, A. Kargov, and S. Schulz, "Design and evaluation of a low-cost force feedback system for myoelectric prosthetic hands," Journal of Prosthetics and Orthotics, vol. 18, no. 2, pp. 57–61, 2006.
[74] I. Gaiser, C. Pylatiuk, S. Schulz, A. Kargov, R. Oberle, and T. Werner, "The FLUIDHAND III: A Multifunctional Prosthetic Hand," Journal of Prosthetics and Orthotics, vol. 21, no. 2, pp. 91–97, 2009.
[75] T. Rhee, U. Neumann, and J. Lewis, "Human Hand Modeling from Surface Anatomy," Proc. Symposium on Interactive 3D Graphics and Games, pp. 1–6, 2006.
[76] N. Motoi, M. Ikebe, and K. Ohnishi, "Real-time gait planning for pushing motion of humanoid robot," IEEE Transactions on Industrial Informatics, vol. 3, no. 2, pp. 154–163, 2007.

Honghai Liu (M'02-SM'06) received his Ph.D. degree in robotics from King's College London, UK, in 2003. He joined the University of Portsmouth, UK, in September 2005. He previously held research appointments at the Universities of London and Aberdeen, and project leader appointments in the large-scale industrial control and system integration industry.

Dr Liu has published over 250 peer-reviewed international journal and conference papers, including four best paper awards. He is interested in approximate computation, pattern recognition, intelligent video analytics and cognitive robotics and their practical applications, with an emphasis on approaches which could contribute to the intelligent connection of perception to action using contextual information. He is an Associate Editor of IEEE Transactions on Industrial Informatics, IEEE Transactions on Systems, Man and Cybernetics, Part C and the International Journal of Fuzzy Systems.